Can humans build robots that will be more moral than humans?

Selmer Bringsjord answers
 

No; positively no. Here's why.

The question presupposes a way to measure a creature's position on a continuum of degrees of moral performance. But no rigorous and received version of such a continuum is in the literature. Hence, I'll here use an informal version of my own.

At the maximal end (moral perfection) a creature C infallibly meets all of its obligations, and in addition carries out (relative to C's power) those supererogatory actions that are maximally beneficial. At the other end would be a thoroughly evil creature: one that fails to meet every one of its substantive obligations, and goes out of its way to carry out actions that are (relative to its level of power) maximally detrimental.

Creatures that are at once sentient, intelligent, free, and creative (SIFC) are, if you will, "make or break." That is, they have the potential to reach high on the continuum, but they can also fall very, very low on it. In contrast, creatures that lack one or more of the SIFC attributes necessarily fall somewhere near the midpoint: they can't be morally great, but they can't be Satanic either.

Now to robots, present and future. They fall near the midpoint, and can't move anywhere else. They can't possibly reach moral greatness; we can. Why? It's simple:

Computing machines aren't conscious (there's nothing it's like to be a robot; they are in this regard no different than, say, slabs of granite), and consciousness is a requirement for moral performance at the level of a human person. In other words, robots lack the S in the SIFC quartet. Without sentience, they can't, for example, empathize; hence, they can't understand one of the main catalysts of the sort of supererogatory actions constitutive of moral greatness. Jones may spontaneously compose a note to Smith not because Jones is obligated to do so and believes that he is, but rather because he feels Smith's sorrow, and seeks to apply epistolary salve.

In addition (and this relates to the I and C in the SIFC quartet, a pair that, in the case of robots, is at the very least compromised relative to humans), moral greatness entails having the capacity to solve difficult moral dilemmas. But such dilemmas can be as complicated as higher mathematics, perhaps more so. Robots won't ever have the intellectual firepower needed for truly demanding math. Ergo, the moral performance of robots will forever be below the moral reach of human persons.

It's important to note that some variants of our question are trivial, because it's trivial to prove that an answer to them is correct. (I'm indebted to Alexander Bringsjord for stimulating my coverage of this point.) I steered clear of considering, for instance, this trivial question:

Q1: Can humans build some robots that will be more moral than some humans?

Given the continuum adumbrated above, it's easy to prove that the answer to this question is yes. But no one should be aiming to build such morally mediocre robots.

And here is a variant of the original question that is very, very important:

Q2: Can we engineer robots that meet all of their obligations?

The answer to this one is yes, and this is the question-answer pair that I see myself as working to demonstrate, with crucial help from Bertram Malle and others.

A final point: Obviously, I interpreted the question in such a way that it's logically equivalent to:

Q': Can humans build some robots that will be more moral than all humans?

which is in turn equivalent to:

Q'': Can humans build some robots that will be more moral than the overall class (or capacity) of humans?

The answer to Q' and Q'', again, for reasons given, is firmly in the negative.

See Bertram Malle's answer →

Selmer Bringsjord is a professor and chair of cognitive science, a professor of computer science, a professor of logic & philosophy, a professor of management & technology, and director of the Rensselaer AI & Reasoning Laboratory at Rensselaer Polytechnic Institute.