Introduction

Many writers have raised the possibility of intelligent artifacts acquiring domination over humanity. Recently such claims have appeared more frequently, not in science fiction, but in the work of technologists and commentators writing from a non-fictional perspective. Prominent examples include Kevin Warwick (Warwick 1998) and Hugo de Garis (de Garis 1999). A rather more positive spin is put on similar technical predictions by Hans Moravec (Moravec 1988). This paper asserts two main theses. The first is that such predictions are not obviously misguided or incoherent. The second is that they are nonetheless wrong. The paper is intended as a contribution towards cool and balanced debate on these issues and any policy implications which might follow.

A preliminary conceptual clarification is required on the question of just what sort of domination might be involved in such a scenario. We can distinguish three main types of possibility. First, there is the situation in which robots (by which we mean any type of intelligent artifact or non-natural autonomous agent) come to exert a directly tyrannical form of power over human beings. This is the situation described by Warwick (Warwick 1998, pp. 21-26) and the more usual sense of the word 'domination'. A similar non-consensual form of tyrannical power seems to be what de Garis means by 'species dominance' (de Garis 1999).

A second possible future situation might be described as 'cultural reliance'. In this situation humans somehow allow a position of dependency to develop. One motivation for this possibility might be that of seeking what has been called the 'warm electric blanket' (Whitby 1988, p. 18). In this case humans place more emphasis on their desire for a comfortable existence than on their desire to control technology directly. It is conceivable that this could eventually lead to a situation in which humans more or less willingly surrender power to some form of intelligent autonomous artifact.

A third possibility is that of co-evolution. This is a complex notion which we will not have space to explore fully here. In this situation, humans and robots have both evolved in ways that increase either human dependence or robot usefulness to the point where one might talk of domination by the robots. It is important to note that in this situation there may well be substantial changes in human beings and in their relationship with technology, so that in the more extreme cases prediction becomes worthless. In the less extreme cases co-evolution tends to look more like one or other of the first two possible future scenarios.

In this paper we will argue, first, that these three scenarios are neither absurd nor obviously self-contradictory. However, we also wish to claim that the tyrannical domination involved in the first scenario is neither probable nor inevitable. Importantly, we claim that humans would have to make (and enforce) a relatively large number of clearly mistaken choices for any of the above three scenarios to develop. Primarily for reasons of space we will concentrate on the first scenario: that in which writers argue for the inevitable emergence of a tyrannical form of domination by robots. This is also the scenario envisaged by the writers we wish to oppose. In addition, a demonstration that a robot takeover can be avoided in this scenario will very strongly suggest that future humans can make choices which will avoid such a takeover in the other two scenarios.

There is a distinction to be made here between 'takeover' and 'over-take'. It is important, although obvious, to see that the two are not co-extensive. That an over-taking on some or all dimensions is possible is not called into question by this paper. But if the worry is that over-taking will necessarily lead to a takeover, then that worry is unfounded. There is no reason to suppose this to be the case. For instance, an advanced artifact culture could develop which absents itself from contact with humans: over-taking with no takeover. Worries of this kind are egotistical worries about our own superiority and should not concern us here.

It is important to set out clearly and coherently why predictions of a robot takeover are unwise. This is for at least two distinct reasons. First, such predictions have an appeal to journalists and are too often reported in a sensational manner. This may well lead to a very distorted public image of the state of the art in, and potential dangers of, fields such as Artificial Intelligence, ALife and robotics. Second, these dangers have been used as an argument in favour of legal restrictions upon, or prohibition of, Artificial Intelligence research. If unwarranted limitations are to be avoided, it seems that the contrary case must be argued in a calm and deliberate manner. This is what is attempted here.



Blay Whitby

2000-03-28