October 5, 2012

Making Ethical Decisions During War Q&A


Osgood's interview with Toan in Vietnam, June 2010. Photo provided by Osgood.
Ron Osgood, a Professor in the Department of Telecommunications, has invited Nguyen Duc Toan to speak about maintaining a moral code in the military. Here Osgood shares a little more about the speaker and how they met.

1. When did you first meet Nguyen Duc Toan? How did you learn his story?

In 2009 I began research for my current documentary project "The Vietnam War/American War: Stories from All Sides." Through a series of conversations with veterans, I was introduced to Doug Reese, an American Army veteran living in Ho Chi Minh City (Saigon). Reese knew Toan and helped me make contact for an interview. During a trip to Vietnam in June 2010, I traveled to the small village where Toan lives to conduct the interview.

2. What inspired you to bring Nguyen Duc Toan to campus?

Toan and I have kept in contact via email, with translation by his daughter Hai. He has always expressed an interest in telling more of his story and a desire to travel to America. In January 2012, during another trip, I visited Toan, and we discussed how I might help him come to the States. Toan is a gentleman and someone who did the right thing while in battle.

3. Do you know why Toan saved Kientzler’s life instead of taking it? Could you comment on that?

I think I answered this above, but I'll add that Toan is an honorable person and did what he knew was morally right.

4. What are some of the ways military duty and moral duty conflict? Is it possible for them to overlap?

This is a tough question to answer in a few sentences and would make for a great question at the presentation. Toan is a good example of overlapping moral and military duty. The question I always ask is: if you saw an enemy pilot parachuting near you after he had been responsible for bombing or firing rockets at you, what would you do?

For more information about this event, refer to http://themester.indiana.edu/events/toan.shtml.



Amber Hendricks

Themester 2012 intern

October 3, 2012

Moral Machines: Q&A with Colin Allen

Dr. Colin Allen, Provost Professor of Cognitive Science and History & Philosophy of Science and Director of the Indiana University Cognitive Science Program, will lecture on "Moral Machines" on Sunday, October 7. Here, he discusses some of the ideas surrounding machines and morality.


1. What kind of morality do we attribute to machines – if any? If there isn’t now, will there ever be?
Gort robot model, courtesy of ‘Mr. T in DC’

The "we" in the question is rather broad -- I think there are many people who think that the whole idea of attributing any kind of morality to artificial machines is ludicrous. 

I have argued, however, that we can think of three levels of moral capacity that might be applied to machines. The first and lowest level is "operational" morality, which really means nothing more than that the machines we build embody the moral outlook of the people who designed and deployed them. For example, software already exists that offers advice to hospital administrators and physicians on which patients should receive scarce medical resources; insofar as this software takes some factors into account (e.g., factors predicting likelihood of survival) but not others (e.g., number of dependent children), it embodies moral assumptions about the importance or unimportance of those factors in making morally responsible decisions about health care.
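
The idea of "operational" morality described above can be sketched in a few lines of code. This is a toy illustration only, not any real clinical system: the factor the advisor weighs (predicted survival) and the one it silently ignores (dependent children) are hypothetical choices made up here to show how a designer's moral assumptions get baked into the ranking.

```python
# Toy sketch of "operational morality": a hypothetical triage advisor whose
# ranking embodies its designers' moral assumptions. Purely illustrative.

def triage_score(patient: dict) -> float:
    # The designers chose to count predicted survival...
    score = patient["survival_probability"]
    # ...and chose NOT to consult patient["dependent_children"], even though
    # patients might think it morally relevant. That omission is itself a
    # moral assumption built into the software.
    return score

patients = [
    {"name": "A", "survival_probability": 0.6, "dependent_children": 3},
    {"name": "B", "survival_probability": 0.7, "dependent_children": 0},
]

# B ranks first solely because of the single factor the designers built in.
ranked = sorted(patients, key=triage_score, reverse=True)
print([p["name"] for p in ranked])  # ['B', 'A']
```

The machine here makes no moral evaluation at all; it simply executes a ranking whose moral content was fixed at design time.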


The highest level is full human-equivalent moral capacity, and this is a holy grail of artificial intelligence research. I tend to think it is achievable in the long run, but at present it is so far off that it remains in the realm of science fiction. In between these two, however, there is an intermediate level that I call "functional morality," in which the system is designed to evaluate actions according to their moral consequences. There are currently a few prototypes for such systems, but nothing in production. However, applications are being developed for medical care and supervision of elderly patients, and in military robotics, where systems might be required to evaluate conformity with the Geneva Conventions and legal aspects of combat operations. Will such systems ever be deployed? I tend to think that functional morality for eldercare systems is more likely than functional morality in systems designed for robotic warfare, but I would not want to bet a lot of money on that.


2. What interests you about morality and the relationship between humans and machines?

Morality is one of those things people have held up as distinguishing us from other animals. That premise is obvious to some and doubtful to others, but the point is that there's a lot of interest in just what morality is and how we come to display it, because it seems so central to human nature. Some people like to point out that humans are more immoral than moral, but that only reinforces the point: to be capable of acting immorally, one must also be capable of acting morally. We don't think of the shark that attacks a surfer as acting either morally or immorally -- it is amoral. Immoral actions by humans deserve that label only because of the capacity for morality.

Machines add an interesting twist into the fundamentally philosophical question of who and what we are. The challenge of creating artificial moral agents is ultimately about understanding what makes us human.  And whether the project of creating artificial morality succeeds, or whether it fails, it is potentially instructive about the cognitive and emotional processes that drive our own moral behavior.


3. Can machines ever exhibit “better” behavior than humans?

Machines are in principle less distractible, less susceptible to emotion-driven loss of control, and more capable of carrying out long chains of reasoning or calculation than humans. For these kinds of reasons, I think machines could do better than humans at foreseeing bad consequences of certain actions, and thus avoiding them. In these cases, they would exhibit "better" behavior. However, there's more to morality than this. The machines we currently have lack the powerful perceptual systems and complex pattern recognition of humans, and because they lack proper emotions, they lack the good side of emotional attachment as much as they lack the bad side that can lead to atrocious behavior. Some scientists have argued that since emotions are a net negative influence on our moral behavior, machines will be better than humans (at least in high-stress situations such as battlefields) because they won't be subject to those negative influences. However, I think it's far from obvious that emotions are a net negative for morality, and so until we have more convincing models of how emotion and cognition interact in moral behavior, it's hard to say whether machines can ever exhibit better all-around behavior than humans. I've already said that I think it can be done, but we are a long way from accomplishing it. In the meantime, that doesn't mean we can't work on making machines exhibit better behavior than they presently do -- continuing to press into the realm of functional morality, in other words.

The replicant Roy from Blade Runner
4. Originally Themester planned to show Blade Runner (but couldn’t for legal reasons). Are we grappling with any of the moral issues in that film today? Will we ever?

The level of technology in Blade Runner is way beyond anything we currently have, so I don't think we are grappling in a serious way with those moral issues at the moment (although science fiction writers and certain philosophers will continue to do so in a speculative way). I think, however, that there is perhaps too much attention on far-future technologies. It distracts us from seriously considering the moral limitations of the semi-autonomous and autonomous machines that we are increasingly putting into service -- everything from call-answering systems to driverless cars.





5. What is your favorite fictional machine and why?

My favorite is Marvin the Paranoid Android from Douglas Adams' The Hitchhiker's Guide to the Galaxy. I like him primarily because he's funny: ‘"Reverse primary thrust, Marvin." That's what they say to me. "Open airlock number 3, Marvin." "Marvin, can you pick up that piece of paper?" Here I am, brain the size of a planet, and they ask me to pick up a piece of paper.’ His resignation to his fate of serving the much less intelligent biological life forms around him, coupled with his self-professed capacity to solve all of the major problems of the Universe "except his own," is a perfect encapsulation of why we build machines -- to serve us and for self-understanding, the latter a task for which there is no guarantee of success.


 
Marvin the Paranoid Android
6. Included in the description of your talk is a quote from Rosalind Picard of MIT: “The greater the freedom of a machine, the more it will need moral standards.” What does that mean?
Like it or not, we increasingly live among and interact with partially intelligent machines. In fact, I just got off a phone call in which about half of what I said was said to a machine (and, I'll add, the machine was more accurate in taking down the tracking number I spoke into the phone than was the person who eventually came on the line, for whom I had to repeat it).

Right now, these machines are ethically "blind" -- they don't even gather information that could be relevant to providing an appropriate response, for instance about the level of urgency involved in tracking my package. With a human operator, one could explain whether the failed delivery was (or was not) causing a lot of unnecessary pain, and the operator could prioritize the request accordingly. The machine currently has no such capacity. A dumb "operational morality" approach would be to allow people to rank the urgency of the request on a scale of 1 to 7. But this would be dumb because this kind of information gathering is very limited, the measure is very crude, and it would be entirely up to the programmer to figure out what to do with, e.g., a 7 ranking vs. a 6.

I think we are going to want machines that respond more flexibly to our needs by interacting with us and assessing the cues we provide in a more natural way than simply asking for a numerical rating. And the more these machines operate in open environments and the more options they have to select among (this is what Picard means by the freedom of the machine), the more they will need the real-time capacity to weigh various pieces of morally relevant information and act accordingly, rather than following some simple rule in which the programmer has tried to anticipate every situation the machine will encounter.
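
The contrast between a crude pre-programmed rule and weighing several morally relevant cues can be made concrete with a small sketch. Everything here is hypothetical: the cue names, the weights, and the thresholds are invented for illustration and do not describe any real call-handling system.

```python
# Toy contrast, purely illustrative: a rigid "operational morality" rule over
# a 1-to-7 self-reported urgency scale, versus a sketch of weighing several
# morally relevant cues at once. Cue names and weights are assumptions.

def rigid_priority(urgency_1_to_7: int) -> str:
    # The programmer must decide in advance what a 7 vs. a 6 means;
    # all the moral judgment is frozen into this one threshold.
    return "escalate" if urgency_1_to_7 >= 6 else "queue"

def weighed_priority(cues: dict) -> str:
    # Combine multiple cues (each scaled 0.0-1.0) instead of one crude number.
    score = (2.0 * cues.get("medical_need", 0.0)
             + 1.5 * cues.get("time_sensitivity", 0.0)
             + 1.0 * cues.get("distress_in_voice", 0.0))
    return "escalate" if score >= 2.5 else "queue"

print(rigid_priority(6))                                        # escalate
print(weighed_priority({"medical_need": 1.0,
                        "time_sensitivity": 0.5}))              # escalate
print(weighed_priority({"distress_in_voice": 0.4}))             # queue
```

Even the second function is still only "operational" in Allen's sense -- the weights are the programmer's moral assumptions -- but it at least gathers richer information, which is a precondition for the "functional morality" the paragraph above describes.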

For more information on Dr. Allen's talk, refer to http://saiu.org/2012/09/07/moral-machines-a-talk-by-colin-allen/.

Rebecca Kimberly
Themester 2012 Intern

October 2, 2012

Chaz Bono: Q&A with Martin Weinberg and Jennifer Bass

Professor Martin Weinberg of the Department of Sociology and Jennifer Bass, the Communications Director for the Kinsey Institute, worked to bring Chaz Bono to campus. Bono will speak at IU Auditorium on Wednesday, October 4 at 8:00 pm. Here they discuss the relevance of the lecture and what it should teach its audience members.

1. How does Chaz Bono's talk relate to good and bad behavior?

WEINBERG: We see hateful folk who BEHAVE BADLY—who seek to humiliate sexual and gender minorities, physically harm them—even kill them. (And this is the tragic story portrayed in the film, “Boys Don’t Cry,” to be shown at the IU Cinema on Oct. 15.) We also see the GOOD BEHAVIOR of those who support these minorities in their quest for acceptance. As to the “in general” part of your question, the same is true regardless of what minority is being considered—not just sex and gender minorities. When people are just being the people they are and not victimizing others, why should they be victimized?


BASS: This is not just about the behavior of one person who transitions from female to male, but about how society reacts to this personal decision. People who are transgendered suffer from discrimination and physical and sexual abuse; in this case, we are interested in the behavior of those who are not transgendered, and why trans individuals are targets of these negative behaviors.

2. Why was Chaz Bono chosen to address this subject matter? Was anyone else considered?

WEINBERG: Chaz was my first and only thought as a person to bring to IU. When it was announced that he would appear on Dancing with the Stars, all kinds of threats (including death threats) were being directed at him as well as at the TV network! I thought: wow, this is a natural for Themester!

BASS: As a public person, and a celebrity, Chaz can attract an audience who otherwise might shy away from discussions on gender identity and transgender issues. Though he is just one person, we hope that Chaz’s story will interest a wide range of students, who otherwise would not be tempted to come out for a lecture by an LGBT activist. Chaz Bono has become a very prominent spokesperson for transgender rights.

3. What do you hope audience members will learn from the discussion with Chaz Bono?

WEINBERG: I hope they will feel enriched by his story. I hope it resonates with them. I hope it gets them to understand that people who are different from those in modal groups can still be good people. And, finally, I hope it motivates them to support the cause of groups who are so “counted out.”

BASS: We hope that Chaz’s lecture will spark discussion on issues of gender identity, and understanding and compassion for those who struggle with feelings of being disconnected from their biological bodies.



Amber Hendricks
Themester 2012 intern

September 28, 2012

When the Rain Stops Falling: Sins of the Fathers ...


Courtesy of IU Theatre
When the Rain Stops Falling, a new play written by Andrew Bovell, follows a family’s history over a span of 80 years and four generations. The play jumps around in time, from England to Australia and back. Sometimes characters of the past and future share the same stage. Sometimes it rains on stage. Literally.

Murray McGibbon, an Associate Professor from the Department of Theatre & Drama, recently took on the compelling project. He hoped to “foster a growing understanding of the enormous power of theatre” with a moving play that “pushes the envelope.” “It takes only two hours to explore eighty years in this play,” he said. “That wouldn’t work in another medium.”

McGibbon’s key mission in directing this piece was “to find the heart of a deeply passionate play.” In a play that exhibits excessively bad behavior, the heart is still very much present. Characters are shown to love others in spite of their misdeeds, to forgive, to find cruel ways of being kind.

When the Rain Stops Falling is a remarkable addition to Themester in that it explores how the bad behavior of one generation can shape the decisions of the following generations. McGibbon read it in the context of a quote from Exodus:

“Yet [God] does not leave the guilty unpunished; He punishes the children and their children for the sin of the parents to the third and fourth generation.”

In this play, the wrongs committed by the first generation blight the fates of all until the fourth generation, when the rain finally stops, when the good-natured Andrew forgives his father. The family’s punishment for its original dark secret finally ends.

McGibbon noted that even if the audience did not sympathize or approve of a character’s behavior, each character was extremely compelling and exhilarating to create. He loved that the play offered the audience a chance to come to their own conclusions, to fill in the holes.

The audience is challenged by the decisions each character makes. Viewers are forced to reckon with their own moral readings of these decisions and characters. Some may consider helping someone commit suicide reprehensible and unthinkable. Others might see it as an act of kindness to a deeply disturbed and unhappy individual. Some might consider maintaining a distance from one’s child cruel and incapacitating. Others might see its necessity, especially if a dark secret could corrupt that child’s view of the world and their own identity forever. Incredibly difficult choices are made in this play, and they do much to test our notions of moral behavior.

Scenes from "When the Rain Stops Falling," courtesy of IU Theatre.
“This is not a preachy play,” said McGibbon. “If anything, it teaches us that we are products of the decisions made by our ancestors.” The most important thing an audience can take away from this is the lasting effect of both good and bad behavior and how far down the line our decisions reach. When the Rain Stops Falling is an extraordinary play, and to McGibbon: “One of the best I’ve ever encountered.”

It runs again this weekend:
Thursday, September 27 @ 7:30 PM
Friday, September 28 @ 7:30 PM
Saturday, September 29 @ 2:00 PM and 7:30 PM



Amber Hendricks
Themester 2012 intern

September 26, 2012

Swept Away by Language

In this blog post, Ivan Kreilkamp, an Associate Professor for the Department of English, discusses ways in which language can exhibit good and bad behavior and presents conflicting and evolving ideas about how language should behave.

Friedrich Nietzsche and mustache.
In English L371 this semester, “Introduction to Criticism and Theory: Original and Copy,” we’ve been considering why and how language -- especially literary language -- has been considered to misbehave, turn bad, or mislead us. In Book X of his Republic, Plato (via Socrates) explains why poetry and other artistic representations can become so dangerous to the state. A poet “establishes a bad system of government in people’s minds by gratifying their irrational side”; poetic representations are, at best, “a kind of game,” diverting but deceptive, far from the truth, and an indulgence of our worst natures. “We surrender ourselves” to poetry, which sweeps us away and casts a kind of enchanting “spell,” but this is dangerous sorcery against which we must protect ourselves. Good, rational language leads us to reason and the truth, and away from the imagination.

Few can match Plato for full-throated denunciation of the tendency of language to “go bad.” His student Aristotle was more receptive to poetry or other imaginative literature, which, he argued in his Poetics, can ideally lead to emotional “catharsis” in an audience and promote an understanding of the “universals” of experience. A great poet may “tell untruths,” but “in the right way,” such that this language will seem “plausible,” “natural,” and “probable” as it tells of “terrifying and pitiable events” and promotes an understanding of heroic action. For Aristotle, poetry and made-up stories can easily go bad when they seem implausible, based on “contrivance” rather than “necessity.” But poetry also can offer access to “what is universal” and virtuous in human experience.

From these classical origins, we have been considering the various ways 19th- and 20th-century theorists, critics, and philosophers worry about language “going bad.” Such thinkers as Oscar Wilde, Charles Baudelaire, and Friedrich Nietzsche, for example, invert and question many of the principles Plato and Aristotle laid out. Plato’s dream of rational, sober language that will contribute to a sound and well-ordered community becomes Nietzsche’s nightmare. For Nietzsche, the Platonic “Man of Reason” has in effect sold his soul, or at least his creative spirit, in exchange for the lie of rationality and “truth.” “As creatures of reason, human beings… no longer tolerate being swept away by sudden impressions and sensuous perceptions.” We implicitly agree to “use the customary metaphors” and to render our language abstract, conventional, and “dull-spirited.” Instead, Nietzsche urges us to become intuitive, metaphorical, “richer, more luxuriant, more proud, skillful, and bold” in our uses of language. The “Man of Intuition” “jumbles up metaphors and shifts the boundary stones of abstraction, describing a river, for example, as a moving road.” In creative metaphor, human beings escape the “mark of servitude” and become creative, swept away by language.



Ivan Kreilkamp
Associate Professor
Department of English