Computational Systems, Responsibility and Moral Sensibility

Henry S. Thompson

HCRC Language Technology Group
Division of Informatics
University of Edinburgh
Scotland, U.K.

ht@cogsci.ed.ac.uk


1. Computers and morality

We can identify three areas of interaction between our understanding of computer systems and moral and spiritual issues:

1. The moral and technical issues involved in empowering computer systems in contexts with significant impact, direct or indirect, on human well-being;

2. The scientific/technical questions in the way of introducing an explicit moral sensibility into computer systems;

3. The theological insights to be gained from a consideration of decision-making in existing and envisageable computers.

We can make this concrete by reference to the parable of the Good Samaritan, if we imagine that the innkeeper fetched for the injured man a barefoot doctor who consulted a medical expert system via a satellite up-link, that the robbers were caught and brought before an automated justice machine, that the Samaritan was in fact a robot, and finally that Paul himself rethought the significance of the parable on the basis of this reformulation.

1.1. Empowering computer systems

The barefoot doctor who consults the medical expert system and follows its recommendations, perhaps without understanding in detail either the tests it calls on her to perform or the remedial actions it then prescribes, raises very pressing issues of responsibility and empowerment. Who is responsible for the actions of computer systems when these have significant potential impact on human life or well-being?

We have a much clearer understanding of the empowerment question with regard to people (doctors, teachers, even coach drivers) or machines whose impact is more obviously mechanical (ships, airplanes, even lifts or electric plugs). In the first case, we impose both a particular training regime and a certification process before we empower people to act in these capacities, often backing this up with regular re-assessment. In the case of machines, training is inappropriate, but testing and certification to explicit standards are typically required by law and expected by consumers.

But to date very little regulation is in place for the soft components of computer systems. If the Samaritan were to die unnecessarily while under the care of the barefoot doctor, and his family sought redress through the courts, no explicit law in Britain or America would cover the issues raised by the role of the expert system. The few available precedents suggest only a lengthy exercise in buck-passing between the operator of the system, the manufacturers of the computer hardware on which it ran, the designers of the software and the programming firm that implemented it under contract. Without prejudice to the larger issues under consideration, there is no question that serious steps should be taken to bring software within the purview of official regulatory procedures.

1.2. Responsibility as such

In the eventuality under discussion, with today's technology, there would be no suggestion that liability might lie with the computer system itself, as such. Computer systems are not legal persons, and our naive understanding of their operation is sufficient to render attributions of legal responsibility inappropriate. The kinds of technical issue that might arise in the hypothetical dispute include the in-principle limits on software and hardware verification, but would presumably not extend to questions of self-consciousness and autonomy, much less to the system's awareness of the difference between right and wrong.

But if we move on to the second of our imaginary modifications to the parable, in which the robbers are brought up before a mechanical magistrate, then these are precisely the issues that will arise.

Before examining this in detail, it is worth reviewing a fictional encounter with these issues.

2. Asimov's Three Laws of Robotics

The practical consequences of attempting to establish an artificial moral sensibility have received extensive consideration in Isaac Asimov's famous science fiction stories, written over a ten-year period between 1940 and 1950, about the deployment into society of "positronic robots", whose moral compass is provided by three built-in laws:

1. "A robot may not injure a human being, or, through inaction allow a human being to come to harm.

2. "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

In the stories, these laws are clearly identified as a necessary and sufficient guarantee of good behaviour, and, interestingly enough given our latter-day scepticism concerning the reliability of computer systems, the manufacturer's ability to correctly and reliably install them in its products is never seriously doubted.

There's actually very little discussion of the moral significance of the Three Laws in the stories, most of which take the form of detective stories: the mystery is some apparently aberrant robot behaviour, and the resolution is an explanation of that behaviour in terms of an exegesis of how the tension between the laws and their clauses plays out in unanticipated ways.

It's worth noting in this connection that Asimov nowhere introduces or depends on a notion of reward and punishment, or of learning, with regard to what he refers to as the ethical aspect of his robots. It's not that they know they shouldn't harm humans, or that they fear punishment if they do, but that they can't harm humans. The non-availability of this aspect of their `thought' to introspection or willed modification reveals the fundamental incoherence of Asimov's construction: we must not only posit a robotic subconscious, constantly engaged in analysing every situation for (impending) threats to the Three Laws, but we must also accord complete autonomy to this subconscious. It's not clear how any such robot could operate in practice, never knowing when its planning might contingently fall foul of a subconscious override.
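
To make the precedence structure concrete, here is a minimal sketch in Python, entirely my own illustration rather than anything Asimov or his fictional manufacturer specifies, of the Three Laws rendered as a strictly ordered filter standing between a robot's planner and its effectors. Every name and field in it is hypothetical.

    # Illustrative sketch only: the Three Laws as a lexicographically ordered
    # screen over candidate actions. All names here are invented for the example.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Candidate:
        description: str
        injures_human: bool        # would carrying this out injure a human being?
        prevents_human_harm: bool  # would it avert harm a human would otherwise suffer?
        obeys_human_order: bool    # was it ordered by a human being?
        preserves_self: bool       # does it avoid destroying the robot?

    def screen(candidates: List[Candidate]) -> Optional[Candidate]:
        """Apply the laws in strict precedence and return a surviving action.

        The point is the ordering: each later criterion is consulted only among
        the candidates the earlier ones have already allowed, and the planner
        that generated the candidates has no access to this filtering.
        """
        # First Law, prohibitive half: discard anything that injures a human.
        allowed = [c for c in candidates if not c.injures_human]
        # First Law, `through inaction' half: if any permitted action averts harm
        # to a human, the robot is not free to choose one that does not.
        if any(c.prevents_human_harm for c in allowed):
            allowed = [c for c in allowed if c.prevents_human_harm]
        # Second Law: prefer obedience to human orders, within what remains.
        if any(c.obeys_human_order for c in allowed):
            allowed = [c for c in allowed if c.obeys_human_order]
        # Third Law: prefer self-preservation, within what remains.
        if any(c.preserves_self for c in allowed):
            allowed = [c for c in allowed if c.preserves_self]
        return allowed[0] if allowed else None

    if __name__ == "__main__":
        choice = screen([
            Candidate("stand back", False, False, True, True),
            Candidate("pull the injured man clear", False, True, False, False),
        ])
        print(choice.description)  # -> "pull the injured man clear"

Even this toy rendering exhibits the difficulty just noted: the planner that proposes the candidate actions has no access to the screen, and so can never know in advance which of its plans will survive it.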

3. Mechanical magistrates and the role of responsibility

Setting the question of moral calculus to one side for a moment, I want to identify another issue which is relevant to the empowerment of artefacts to perform tasks with significant human impact: the role of self-consciousness, particularly consciousness of one's own responsibility, in fitting an individual for such tasks. Introspection suggests that this aspect of humanity is fundamental to our willingness to accept judgement at the hands of others. We have some more or less well-articulated understanding of the tension between the ideal of the rule of law and the reality of the need for interpretation and qualification by human beings. Our willingness to accept the latter, at least in moderation, depends in turn on our recognition that the judge not only is responsible for the judgement but also takes responsibility for it, and that implicit in this is the notion that the implications of taking responsibility are themselves a factor in the judgement. To understand just what this means, a brief diversion into philology is in order.

4. Passion

The word `dispassionate' might be thought of as describing exactly the intrinsic property of a mechanical magistrate which would make it so well suited to its job. The quote above about what would make a robot an ideal civil executive is clearly appealing to this. But for our purposes, the opposite of `dispassionate' is not `passionate', but rather `compassionate'. It's not that we need or want random gusts of emotionally fuelled prejudice, but that we depend on a fundamental recognition of the joint humanity of judge and judged. It is, after all, precisely this claim on care arising from common humanity which the parable of the Samaritan is all about. In the literal sense such commonality can never include both protoplasmic and mechanical intelligences, but can we imagine any other basis for com-passion between human and machine? It seems to me that compassion is constitutive of moral sensibility. If this is right, then it all comes down to the question of community: the way we derive our identity from our membership in overlapping hierarchies of groups.

5. Virtues, practice, community and embodiment

In After Virtue, MacIntyre attempts to re-establish the Aristotelian notion of virtue at the heart of morality and moral philosophy. In the course of so doing, he appeals to individual and social practice as the locus of the definition of the good, in terms of which in turn virtue is to be understood. This immediately raises questions for any approach to computational morality, as it suggests there can be no such thing without (embodied?) participation in communities of practice at many levels.

And this seems to me to be a pretty nearly fatal circularity: we allow children such co-participation as part of their acculturation process, as a means of imbuing them with a moral sensibility (or alternatively of stimulating/awakening a God-given disposition thereto), precisely because we have the most personal possible evidence that they are capable of moral agency - we know we were once like them, and we managed it. What evidence would it take to convince us that constructed artefacts, as opposed to flesh of our flesh, should be allowed that opportunity?

6. Towards a computational theology

Just as (in my view) cognitive science is not a subject matter but a methodology for enquiry in a range of the human sciences such as linguistics and psychology, so too computational theology should not be understood as an alternative to, say, process theology or liberation theology. Rather, it would be a component form of theological enquiry, an addition to the methodological inventory for investigating theological issues. In that sense the whole of the preceding discussion has been a preliminary attempt at computational theology.

Two examples, one brief and the other even briefer, do not in themselves constitute the foundation of a new theological methodology, but I hope they lend at least an initial plausibility to the case for one. If so, then not only may the idea be carried forward by professionals from the two contributing disciplines, but also the invitation to amateur theologising via the science fiction perspective may be no bad thing for society at large.

7. References

    1. Asimov, Isaac, 1950. I, Robot, Putnam, New York.

    2. MacIntyre, Alasdair, 1985. After Virtue, second edition, ISBN
       0715616633, Duckworth, London.

    3. [Previous paper (1985) on this subject by the author]