October 3, 2012

Moral Machines: Q&A with Colin Allen

Dr. Colin Allen, Provost Professor of Cognitive Science and History & Philosophy of Science and Director of the Indiana University Cognitive Science Program, will lecture on "Moral Machines" on Sunday, October 7. Here, he discusses some of the ideas surrounding machines and morality.


1. What kind of morality do we attribute to machines – if any? If there isn’t now, will there ever be?
[Image: Gort robot model, courtesy of ‘Mr. T in DC’]

The "we" in the question is rather broad -- I think there are many people who think that the whole idea of attributing any kind of morality to artificial machines is ludicrous. 

I have argued, however, that we can think of three levels of moral capacity that might be applied to machines. The first and lowest level is "operational" morality, which really means nothing more than that the machines we build embody the moral outlook of the people who designed and deployed them. For example, software already exists that offers advice to hospital administrators and physicians on which patients should receive scarce medical resources; insofar as this software takes some factors into account (e.g., factors predicting likelihood of survival) but not others (e.g., number of dependent children), it embodies moral assumptions about the importance or unimportance of those factors for making morally responsible decisions about health care.
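
As a rough illustration of how "operational" morality can be baked into code (this sketch is hypothetical, not the hospital software referred to above; the function name, factors, and weights are invented):

    # Hypothetical resource-allocation sketch -- not any real system.
    # Which inputs the function even accepts is a moral choice made by its
    # designers: likelihood of survival counts, dependent children do not.

    def triage_priority(survival_likelihood: float, expected_benefit_years: float) -> float:
        """Score a patient for a scarce resource; higher means higher priority."""
        # The weights below encode the designers' view of what matters.
        return 0.7 * survival_likelihood + 0.3 * min(expected_benefit_years / 10.0, 1.0)

    # A patient's number of dependents never enters the calculation, so the
    # program silently treats that factor as morally irrelevant.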


The highest level is full human-equivalent moral capacity, and this is a holy grail of artificial intelligence research. I tend to think that this is achievable in the long run, but currently it is so far away that it's in the realm of science fiction. In between these two, however, there's an intermediate level that I call "functional morality," in which the system is designed to evaluate actions according to their moral consequences. There are currently a few prototypes for such systems, but nothing in production. Applications are being developed for medical care and the supervision of elderly patients, and in military robotics, where systems might be required to evaluate conformity with the Geneva Conventions and the legal aspects of combat operations. Will such systems ever be deployed? I tend to think that functional morality for eldercare systems is more likely than functional morality in systems designed for robotic warfare, but I would not want to bet a lot of money on that.


2. What interests you about morality and the relationship between humans and machines?

Morality is one of those things that people have held up as distinguishing us from other animals. That premise is obvious to some and doubtful to others, but the point is that there's a lot of interest in just what morality is and how we come to display it, because it seems so central to human nature. Some people like to point out that humans are more immoral than moral, but that only reinforces the point; to be capable of acting immorally, one must also be capable of acting morally. We don't think of the shark that attacks a surfer as acting either morally or immorally -- it is amoral. Immoral actions by humans deserve that label only because of the human capacity for morality.

Machines add an interesting twist to the fundamentally philosophical question of who and what we are. The challenge of creating artificial moral agents is ultimately about understanding what makes us human. Whether the project of creating artificial morality succeeds or fails, it is potentially instructive about the cognitive and emotional processes that drive our own moral behavior.


3. Can machines ever exhibit “better” behavior than humans?

Machines are in principle less distractible, less susceptible to emotion-driven loss of control, and more capable of carrying out long chains of reasoning or calculation than humans are. For these kinds of reasons, I think machines could do better than humans at foreseeing the bad consequences of certain actions, and thus at avoiding them. In these cases, they would exhibit "better" behavior. However, there's more to morality than this. The machines we currently have lack the powerful perceptual systems and complex pattern recognition of humans, and because they lack proper emotions, they lack the good side of emotional attachment as much as they lack the bad side that can lead to atrocious behavior.

Some scientists have argued that since emotions are a net negative influence on our moral behavior, machines will be better than humans (at least in high-stress situations such as battlefields) because they won't be subject to those negative influences. However, I think it's far from obvious that emotions are a net negative for morality, and until we have more convincing models of how emotion and cognition interact in moral behavior, it's hard to say whether machines can ever exhibit better all-round behavior than humans. I've already said that I think it can be done, but we are a long way from accomplishing it. In the meantime, that doesn't mean we can't work on making machines exhibit better behavior than they presently do -- continuing to press into the realm of functional morality, in other words.

[Image: The replicant Roy from Blade Runner]

4. Originally Themester planned to show Blade Runner (but couldn’t for legal reasons). Are we grappling with any of the moral issues in that film today? Will we ever?

The level of technology in Blade Runner is way beyond anything we currently have, so I don't think we are grappling in a serious way with those moral issues at the moment (although science fiction writers and certain philosophers will continue to do so in a speculative way). I think, however, that there is perhaps too much attention on far-future technologies. It distracts us from seriously considering the moral limitations of the semi-autonomous and autonomous machines that we are increasingly putting into service -- everything from call-answering systems to driverless cars.

5. What is your favorite fictional machine and why?

My favorite is Marvin the Paranoid Android from Douglas Adams' The Hitchhiker's Guide to the Galaxy. I like him primarily because he's funny: ‘"Reverse primary thrust, Marvin." That's what they say to me. "Open airlock number 3, Marvin." "Marvin, can you pick up that piece of paper?" Here I am, brain the size of a planet, and they ask me to pick up a piece of paper.’ His resignation to his fate of serving the much less intelligent biological life forms around him, coupled with his self-professed capacity to solve all of the major problems of the Universe "except his own," is a perfect encapsulation of why we build machines -- to serve us and for self-understanding, the latter a task for which there is no guarantee of success.

[Image: Marvin the Paranoid Android]

6. Included in the description of your talk is a quote from Rosalind Picard of MIT: “The greater the freedom of a machine, the more it will need moral standards.” What does that mean?

Like it or not, we increasingly live among and interact with partially intelligent machines. In fact, I just got off a phone call in which about half of what I said was said to a machine (and, I'll add, the machine was more accurate in taking down the tracking number I spoke into the phone than the person who eventually came on the line, for whom I had to repeat it).

Right now, these machines are ethically "blind" -- they don't even gather information that could be relevant to providing an appropriate response, for instance about the level of urgency involved in tracking my package. With a human operator one could explain whether the failed delivery was (or was not) causing a lot of unnecessary pain, and the operator could prioritize the request accordingly. The machine currently has no such capacity. A dumb "operational morality" approach would be to allow people to rank the urgency of their request on a scale of 1 to 7. But this would be dumb because that kind of information gathering is very limited, the measure is very crude, and it would be entirely up to the programmer to figure out what to do with, e.g., a 7 ranking vs. a 6.
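
To see concretely why such a rule is crude, here is a hypothetical sketch of the 1-to-7 approach (the function, thresholds, and responses are invented for illustration and are not drawn from any real system):

    # Hypothetical "rate your urgency 1-7" handler -- invented for illustration.
    def handle_request(tracking_number: str, urgency: int) -> str:
        """Route a package inquiry using only a self-reported 1-7 rating."""
        if not 1 <= urgency <= 7:
            raise ValueError("urgency must be between 1 and 7")
        # The programmer must decide, in advance, what a 7 means versus a 6:
        if urgency == 7:
            return "escalate " + tracking_number + " to a human operator"
        if urgency >= 5:
            return "flag " + tracking_number + " for same-day follow-up"
        return "queue " + tracking_number + " for standard processing"

    # No context is gathered (why is it urgent? who is affected?), so all the
    # moral weighing happens once, at design time, not in the situation itself.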

I think we are going to want machines that respond more flexibly to our needs by interacting with us and assessing the cues we provide in a more natural way than simply asking for a numerical rating. And as these machines operate in more and more open environments and have more and more options to select among (this is what Picard means by the freedom of the machine), they will increasingly need the real-time capacity to weigh various pieces of morally relevant information and act accordingly, rather than following some simple rule in which the programmer has tried to anticipate all the situations the machine will encounter.

For more information on Dr. Allen's talk, refer to http://saiu.org/2012/09/07/moral-machines-a-talk-by-colin-allen/.

Rebecca Kimberly
Themester 2012 Intern
