Robot Minds and Human Ethics
Wendell Wallach
Yale University's Interdisciplinary Center for Bioethics
February 5, 2008
5:30–7:00 pm
Social Sciences 016
Abstract
Is it possible to design software agents and robots capable of making moral judgments, that is, Artificial Moral Agents (AMAs)? As the autonomy of artificial agents increases, the challenge of ensuring that they will not cause harm to humans becomes far more complex than the safety concerns engineers commonly address. Can we implement moral theories such as utilitarianism, Kant's categorical imperative, Aristotle's virtues, the Golden Rule, or even Asimov's Three Laws of Robotics in computational systems? Which bottom-up strategies (genetic algorithms, learning algorithms, and the like) might facilitate the development of software agents with moral acumen? Does moral judgment require consciousness, a sense of self, emotions, social skills, an understanding of the semantic content of symbols and language, or that a system be embodied in the world? Designing artificial systems that are sensitive to moral considerations forces us to think deeply about human decision making and ethics, and about the ways in which we humans may differ from the artificial entities we will create.
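
To make the first of these questions concrete, here is a minimal sketch, in Python, of a naively "utilitarian" decision procedure: pick the action whose expected outcome utility is highest. The actions, outcome probabilities, and utility numbers below are invented for illustration and stand in for exactly what a real system would have to estimate; the hard problems the talk raises begin where this toy ends, in deciding whose welfare counts and how outcomes and utilities are learned or assigned.

    from typing import Callable, Dict, List, Tuple

    def choose_action(outcomes: Dict[str, List[Tuple[str, float]]],
                      utility: Callable[[str], float]) -> str:
        """Return the action with the highest expected utility.

        outcomes maps each candidate action to (outcome, probability) pairs;
        utility scores an outcome for everyone affected, taken together.
        """
        def expected_utility(action: str) -> float:
            return sum(p * utility(o) for o, p in outcomes[action])
        return max(outcomes, key=expected_utility)

    # Toy example with made-up numbers: a delivery robot deciding whether
    # to brake hard or swerve around an obstacle.
    outcomes = {
        "brake":  [("minor delay", 0.9), ("rear-end collision", 0.1)],
        "swerve": [("no harm", 0.7), ("hits obstacle", 0.3)],
    }
    utilities = {"minor delay": -1.0, "rear-end collision": -50.0,
                 "no harm": 0.0, "hits obstacle": -80.0}

    print(choose_action(outcomes, lambda o: utilities[o]))  # "brake" under these invented numbers

Even this crude top-down calculation quietly presupposes answers to the talk's other questions: where the outcome model comes from, whether a single utility scale can aggregate harms to different people, and whether any of it amounts to moral judgment rather than arithmetic.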