Can humans build robots that will be more moral than humans?

Bertram Malle answers


To answer this question, we need to establish what it means to be “less” or “more” moral. People typically think that being moral is about moral action—that is, action in accordance with moral norms. However, a robot that merely abides by moral norms will not be satisfactorily moral. Matthias Scheutz and I have developed a model of moral competence that goes beyond moral action and involves five competences: having a moral vocabulary, representing moral norms, acting in accordance with such norms, making judgments about behaviors that violate (or exceed) those norms, and communicating about one’s own or others’ morally significant behaviors.

Some of these competences are very difficult to implement in a robot—not just because of their demands on moral cognition and decision making but because of their demands on language, thought, and social-cognitive capacities. Right now, I don’t see any way that robots could even come close to an adult who is reasonably competent in all five of these elements. Nonetheless, we hope to make progress in designing and teaching robots to be morally competent in at least some of these elements.

For example, we are currently developing a minimal vocabulary (of a few hundred words) that a robot would need to master in order to recognize and properly parse “moral talk”; this would be useful not just for grounding full-fledged moral communication but also for searching large narrative databases to learn about typical contexts and use of moral language. In our ongoing research, we also try to identify the fundamental cognitive properties of human norm systems and hope to implement them in a computational architecture. Huge challenges await any such architecture, if it is to represent the thousands (perhaps hundreds of thousands) of human norms that are interconnected in hierarchical and context-dependent ways.
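To make the vocabulary idea concrete, here is a minimal sketch (in Python) of how a small moral lexicon could be used to flag “moral talk” in text, for instance when searching a narrative database. The categories, word lists, and function names are illustrative assumptions, not Malle and Scheutz’s actual vocabulary or system.

```python
# Minimal sketch: flagging "moral talk" with a small lexicon.
# The categories and word lists below are illustrative placeholders;
# a real vocabulary would contain a few hundred curated entries.
import re

MORAL_LEXICON = {
    "obligation":  {"must", "ought", "should", "duty"},
    "prohibition": {"forbidden", "wrong", "prohibited"},
    "evaluation":  {"blame", "praise", "fair", "unfair", "cruel", "kind"},
    "permission":  {"allowed", "permitted", "may"},
}

def flag_moral_talk(sentence: str) -> dict:
    """Return the lexicon categories (and matched words) found in a sentence."""
    tokens = set(re.findall(r"[a-z']+", sentence.lower()))
    hits = {}
    for category, words in MORAL_LEXICON.items():
        matched = sorted(words & tokens)
        if matched:
            hits[category] = matched
    return hits

if __name__ == "__main__":
    print(flag_moral_talk("You ought to help her; it would be cruel not to."))
    # -> {'obligation': ['ought'], 'evaluation': ['cruel']}
```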

However, if we limited a robot’s domain of application (e.g., to caring for one elderly person in that person’s apartment), and if we could therefore computationally represent a small subset of the highly complex human norm system, such a robot might be considered “more moral” than humans in one narrow way. This robot would, by design, not be selfish and not be irrational—two well-known obstacles to human moral action—and would show no weakness of will due to boredom, annoyance, or temptation. It would thus be more reliably moral than most humans are, but it would lack many other moral capacities. These capacities are likely to remain out of reach for even the most sophisticated machines unless they are raised in human communities and get the time and opportunity to actually learn all elements of moral competence. For that is the only way humans can acquire moral competence.
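As a rough illustration of what such a domain-limited norm representation might look like, the sketch below encodes a handful of hypothetical elder-care norms, each tied to a context and given a crude priority. The norms, context features, and priority values are invented for illustration and only stand in for the hierarchical, context-dependent structure described above.

```python
# Minimal sketch, under strong simplifying assumptions, of a small
# norm subset for a single elder-care setting. All entries are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    action: str          # what the norm regulates
    deontic: str         # "obligatory", "forbidden", or "permitted"
    context: frozenset   # situation features under which the norm applies
    priority: int        # crude stand-in for hierarchical ordering

ELDER_CARE_NORMS = [
    Norm("call_for_medical_help", "obligatory", frozenset({"resident_fallen"}), 10),
    Norm("enter_bedroom", "forbidden", frozenset({"door_closed"}), 3),
    Norm("enter_bedroom", "permitted", frozenset({"door_closed", "resident_fallen"}), 9),
    Norm("remind_medication", "obligatory", frozenset({"medication_due"}), 5),
]

def applicable_norms(situation: set, action: str):
    """Return norms for an action whose context matches the situation,
    highest priority first, so conflicts can be resolved by rank."""
    matches = [n for n in ELDER_CARE_NORMS
               if n.action == action and n.context <= situation]
    return sorted(matches, key=lambda n: n.priority, reverse=True)

if __name__ == "__main__":
    situation = {"door_closed", "resident_fallen"}
    for n in applicable_norms(situation, "enter_bedroom"):
        print(n.deontic, n.action, n.priority)
    # permitted enter_bedroom 9   (outranks the lower-priority prohibition)
    # forbidden enter_bedroom 3
```

A single numeric priority is, of course, only a toy substitute for the interconnected, hierarchical, and context-dependent relations among norms that make the full problem so hard.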

Bertram Malle is a professor in the Department of Cognitive, Linguistic, and Psychological Sciences at Brown University.