
How To Avoid A Robot Takeover

What we have claimed so far is that a takeover is not inevitable, but that it is possible. It is also our contention that a takeover is not probable. But it does not follow from this that we can afford to be complacent.

Think of it, if you like, as an exercise in theoretical crime prevention - how do we avoid generating the motive, means, and opportunity for a takeover? Some methods are obvious and are already in the arena of debate. These include the various fail-safe mechanisms we might implement, buddy systems, ethical systems programming, and, perhaps most importantly, keeping humans as final arbiters in decision making. If these options were overlooked, a takeover would be that much more likely. We see no reason why these systems will not be implemented, given the preponderance of what has been styled 'the right stuff' (Whitby 1988) among those in power and, to a lesser extent, in the population as a whole.

But given that these systems will not be infallible, other factors must be considered. Chief amongst these is the question of ubiquity. We resist the temptation to make ubiquity a necessary condition of takeover. We do suggest, however, that the ubiquity of intelligent artefacts, or of their effects, greatly enhances the probability of takeover. One way that ubiquity could arise would be through the usurping of existing structures. Hence, governmental edicts requiring all citizens to have particular relationships to technology must be closely scrutinized (note 4). On the personal level, we would suggest (perhaps a little mischievously) that one of the best ways to ensure that technology is always with you is to have it implanted in the manner of Kevin Warwick.

All of the above points to clear choices to be made by us as individuals and as societies. We believe that these choices will be made and that a takeover will be avoided. What is needed is determined reflection by engineers, the commercial sector, and government on the possible ramifications of technology (though this is hardly a new idea). None of the above reduces the need for AI and ALife researchers to conform to the highest ethical standards in their work, and to encourage public scrutiny of both their work and its underlying social and political assumptions. What is needed most of all is reasoned public debate. Warwick, de Garis, and Moravec are to be congratulated for sparking public debate. It is perhaps time now for a little more reason.



Blay Whitby

2000-03-28