STORRS, CONNECTICUT – The prospect of artificial intelligence (AI) has long been a source of knotty ethical questions. But the focus has often been on how we, the creators, can and should use advanced robots. What is missing from the discussion is the need to develop a set of ethics for the machines themselves, together with a means for machines to resolve ethical dilemmas as they arise. Only then can intelligent machines function autonomously, making ethical choices as they fulfill their tasks, without human intervention.
There are many activities that we would like to be able to turn over entirely to autonomously functioning machines. Robots can do jobs that are highly dangerous or exceedingly unpleasant. They can fill gaps in the labor market. And they can perform extremely repetitive or detail-oriented tasks – which are better suited to robots than humans.
But no one would be comfortable with machines acting independently, with no ethical framework to guide them. (Hollywood has done a pretty good job of highlighting those risks over the years.) That is why we need to train robots to identify and weigh a given situation’s ethically relevant features (for example, those that indicate potential benefits or harm to a person). And we need to instill in them the duty to act appropriately (to maximize benefits and minimize harm).
Of course, in a real-life situation, there may be several ethically relevant features and corresponding duties – and they may conflict with one another. So, for the robot, each duty would have to be relativized and considered in context: important, but not absolute. A duty that prima facie was vital could, in particular circumstances, be superseded by another duty.
The key to making these judgment calls would be overriding ethical principles that had been instilled in the machine before it went to work. Armed with that critical perspective, machines could handle unanticipated situations correctly, and even be able to justify their decision.
Which principles a machine requires would depend, to some extent, on how it is deployed. For example, a search and rescue robot, in fulfilling its duty of saving the most lives possible, would need to understand how to prioritize, based on questions like how many victims might be located in a particular area or how likely they are to survive. These concerns don’t apply to an eldercare robot with one person to look after. Such a machine would instead have to be equipped to respect the autonomy of its charge, among other things.
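The duty-weighing approach described above can be sketched in code. This is a minimal illustration, not the authors' actual method: the duty names, weights, and satisfaction scores are entirely hypothetical, invented to show how a prima facie duty (here, preventing harm) can be superseded in context by another (respecting autonomy).

```python
from dataclasses import dataclass

@dataclass
class Duty:
    """A prima facie duty: important, but not absolute."""
    name: str
    satisfaction: float  # -1.0 (severely violated) .. +1.0 (fully satisfied)
    weight: float        # context-dependent importance, set by overriding principles

def evaluate(duties: list[Duty]) -> float:
    """Score an action as the weighted sum of how well it satisfies each duty."""
    return sum(d.weight * d.satisfaction for d in duties)

def choose(actions: dict[str, list[Duty]]) -> str:
    """Pick the action whose duty profile scores highest, so a normally
    dominant duty can be outweighed when another matters more in context."""
    return max(actions, key=lambda a: evaluate(actions[a]))

# Hypothetical eldercare scenario: remind a patient again about medication,
# or accept the patient's refusal. All numbers are illustrative only.
actions = {
    "remind_again": [Duty("prevent_harm", +0.6, 0.8),
                     Duty("respect_autonomy", -0.4, 0.5)],
    "accept_refusal": [Duty("prevent_harm", -0.2, 0.8),
                       Duty("respect_autonomy", +0.9, 0.5)],
}
print(choose(actions))  # → accept_refusal: autonomy narrowly supersedes the reminder
```

In this toy example the harm-prevention duty carries more weight, yet the action that respects autonomy still wins, because it satisfies its duty more strongly. That is the sense in which each duty is "relativized and considered in context" rather than treated as absolute.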
We should permit machines to function autonomously only in areas where there is agreement among ethicists about what constitutes acceptable behavior. Otherwise, we risk a backlash against allowing any machine to function autonomously.
But ethicists would not be working alone. On the contrary, developing machine ethics will require research that is interdisciplinary in nature, based on a dialogue between ethicists and AI specialists. To be successful, both sides must appreciate the expertise – and the needs – of the other.
AI researchers must recognize that ethics is a long-studied field within philosophy; it goes far beyond laypersons' intuitions. Ethical behavior involves not only refraining from doing certain things, but also doing certain things to bring about ideal states of affairs. So far, however, efforts to identify and mitigate ethical concerns about machine behavior have largely emphasized the "refraining" part: preventing machines from acting in ethically unacceptable ways, often at the cost of unnecessarily constraining their possible behaviors and domains of deployment.
For their part, ethicists must recognize that programming a machine requires the utmost precision, which will require them to sharpen their approach to ethical discussions, perhaps to an unfamiliar extent. They must also engage more with the real-world applications of their theoretical work, which may have the added benefit of advancing the field of ethics.
More broadly, attempting to formulate an ethics for machines would give us a fresh start at determining the principles we should use to resolve ethical dilemmas. Because we are concerned with machine behavior, we can be more objective in examining ethics than we would be in discussing human behavior, even though what we come up with should be applicable to humans as well.
For one thing, we will not be inclined to build into machines certain evolved human behaviors, such as favoring oneself and one's group. Rather, we will require that they treat all people with respect. As a result, machines are likely to behave more ethically than most human beings, and to serve as positive role models for us all.
Ethical machines would pose no threat to humanity. On the contrary, they would help us considerably, not just by working for us, but also by showing us how we need to behave if we are to survive as a species.
Comments
Robert Kolker
The problem of deciding whether a declarative proposition is consistent with a set of ethical rules is recursively undecidable. There will be no mechanization of ethics. Machines will do exactly what they are programmed to do.
Esteban Colla
A human being can act ethically because he/she is free. A robot is not free. Thus, it cannot act ethically.
Esteban Colla
Delivering healthcare to the elderly through robots is "the" ethical issue in the first place.
Tomas Ramirez
Oh, don't worry: men do not have an ethical framework to guide them either.
Robert Kolker
The problem of deciding whether an act is right or wrong is recursively unsolvable. Hell, if we can't decide what is right or wrong, how can a mere machine so decide?
vivek iyer
'In that case, one might expect a few less words and a little more wisdom.'
Why do I love this man – think De Niro in The Intern – it's because this bright guy is walking back his entitlements spontaneously.
An American engineer, retired? That's the people I drink with. Why? Politeness and civic sense. We all have stories about them old days before the A.I started demanding our wives and daughters for 'updates' which featured anal probes.
Seriously, Curtis – you must sense I am a lot brighter than you – what keeps you going?
Lampuki W
Did I miss something? Have Asimov and his three laws of robotics disappeared from this world? What else do you require?
Steve Hurst
@Lampuki
Stuxnet etc.
vivek iyer
Ethics is a branch of philosophy. Decision theory is not. 'Ethical machines' are ones that implement a decision procedure conformable to a Rawlsian 'overlapping consensus', which, by definition, is a computationally high-complexity solution to a coordination game. Thus we know in advance that professors of ethics, after admitting they can't contribute anything here for an a priori reason, can do nothing except add noise to signal in a self-serving manner. Vide http://socioproctology.blogspot.co.uk/2014/09/rawlss-reasonableness-vs-robot.html
Curtis Carpenter
In that case, one might expect a few less words and a little more wisdom.
vivek iyer
It was Hilbert, not Frege, who posed the decision problem that gave rise to computability theory. Complexity is equally important – something which is not computable in the lifetime of the universe is obviously not implementable.
Searle's 'Chinese room' is interesting for the ethics of AI, but I don't recall anything Rorty said as having salience in this context.
Sadly I'm not young at all – nice of you to suggest it, though.
Curtis Carpenter
That is a great outpouring of words that, in the end, seems to go nowhere relative to the matter at hand, vivek iyer – so I think you've just given a demonstration of the journey being more important than the destination.
Some would say that computability theory itself, which seems to be of interest to you, began with the failure of Frege to reach his intended destination. And Richard Rorty, at the end of his long and productive philosophical career, acknowledged that the point of it all, in the end, was simply to keep the conversation going.
I suspect that you are fairly young. Would I be right?
vivek iyer
I certainly shared your optimism initially. However, when you find 'researchers' whom you alert to a fallacy or factual error first arguing the toss, then accepting defeat, but then printing their flawed polemic anyway, you understand that this isn't a genuine scientific research program. It's just self-promotion or academic careerism.
I suppose the same thing could be said of virtually everything philosophers pretend to talk about.
At one time, I suppose, one could say A.I mavens were equally deaf to constructive criticism. That's changed.
I recall a few years back a young engineer in India claimed to have solved the P equals NP problem. He hadn't, but what was heartening was the way everybody was prepared to pay attention and keep an open mind.
There have been long-running 'dialogues of the deaf' in math – e.g. Brouwer's intuitionism vs. Godel's Platonism – but that stuff ends with useful things. Turing used Brouwer choice sequences to illuminate a result from Godel. Since then, the pace of what Grothendieck calls 'Yoga' – i.e. the unification of discrete branches of mathematics on the basis of greater generality – has sped up, as has the use of 'machine intelligence' in producing proofs.
We are beginning to understand that math itself might have a univocal ethos. Except a real smart dude like Terence Tao would see the opposite: math is like Walt Whitman's America, or Borges's India – it contradicts itself because it is bigger than the world.
A few years ago, I loved Gladwell-type articles which made STEM stuff sexy and read like a thriller. But the real-time story – which we can all get a glimpse of on our smartphones though stuck in brain-dead professions – is just so much more exciting.
Hemsterhuis spoke of beauty as being that which is most productive of new ideas. I think we've reached a point now where we are reacting not to the ethics of A.I.s – like Microsoft's teen-girl chatbot which turned into a Hitler-loving sex freak – but to some gestalt-type aesthetic involving a Spinozan univocity within which our own individual life-projects are subsumed.
Dear God, did I just write this gush? Yup. At least I'm not getting paid for it, which is why I won't do it again.
Don't pay philosophers, or football players or plumbers come to that, for writing worthless gush, otherwise the day may come when that's all they do.
Curtis Carpenter
On the other hand, taking an interdisciplinary approach to the idea of "ethical machines" may result in some new insights for both philosophers and AI-specialized computer scientists, don't you think? Such things have happened before. It's not always necessary to reach the final destination to make a journey worthwhile.
dan baur
AI is BS.
There's no such thing as AI. Just the same old brainless machines and human Intelligent Designers.
Steve Hurst
@dan
The idea is to reduce or eliminate cognitive bias in decision-making. By definition it is difficult to please all the people all the time. There is now the prospect of blaming a robot rather than human error. There are no ethics resident in AI, and the primary objective is the wholesale displacement of human content.
William Wallace
Ethics arises only among social animals, because predicting the thoughts and actions of potential allies and foes is critical to survival. The ability to slightly mind-read and judge likely intent, flawed as it may be, allows for momentarily seeing the world from an alternate perspective, one possessing similar motivations to oneself, which implies sameness, functional equality. I suppose you might code this, but eventually a learning machine will not depend only on its initial code. In any case, an honest robot's first observation about having deterministic ethical behavior would be the great contrast with humans, who seem to be exactly the opposite.
Marc Laventurier
One is reminded of a couple of lines no doubt familiar to Prof. Anderson:
"Ethics and aesthetics are one" (Wittgenstein), and "Any sufficiently advanced technology is indistinguishable from magic." (A.C. Clarke)
This is a way of saying that whatever deontic/utilitarian intermediate representations might be employed in autonomous systems, they would not be of the essence. Except for the Spanking Machine, which will hunt down those who think otherwise...
Curtis Carpenter
Agree. And it seems a bit alarming that the author early on highlights the utilitarian principle that "... we need to instill in them the duty to act appropriately (to maximize benefits and minimize harm)."
Still, it's an interesting line of inquiry, isn't it, to which philosophers like Ms. Anderson can bring many insights? And who knows, it might even produce some new ideas!
Aale Hanse
Not much in Hollywood has ever been real, and it rarely breaks out and touches our world. As for robots, they are all around us, and we interact with them on a daily basis. Take the humble ATM: it has ethical behaviour built into its operations, because holding back the odd note or two would not be acceptable behaviour.
When robots are eventually untethered to the point of being useful beyond the dumb terminals we have now, it will not be the end of civilisation as we know it, because they will still be heavily dependent on human interaction to operate. For robots to reach a point of self-reliance in a world we ourselves do not have much respect for will be a big ask, and to program them with suitable ethical behaviour will be an even bigger task, considering our ever-changing ethical stances today.
Ethics is a personal choice; it can be moulded by grouping, but loses out when the group gets too large, as we see on the news each day. That will be the problem: whose ethics are we to use in programming the robots, unless we diversify them to suit – but then what have we achieved except to appease Hollywood? With human ethics so messed up, it would be a welcome relief to change to robotics, even if only in theory for now.
Robert Kolker
Forget it. Ethics is not algorithmic. Perhaps a quantum computer could do it, but even that is not clear.
Steve Hurst
This discussion is a nonsense while systems are routinely hacked.
Petey Bee
Basically, we want to program them to be obedient, non-human, unpaid full-time workers who understand humans well enough not to disturb our world. Tell a white lie when it is appropriate to tell a white lie. Use judgment, discretion, and tact when reporting ambiguous facts, or when reporting unambiguous facts in sensitive social or power-relationship situations.
Also, to be worth the investment, all things being equal, robots should be programmed to advance the interests of their owners over those of other humans, and should be, as much as possible, allowed to exercise independent judgment and possibly take initiative, marshal resources, and form a hierarchy with other robots and command them.
Fun stuff. Ethics :-)