
A metaphor

One might be inclined to think of the lead-up to a robot takeover as something akin to a game of chess. As an analogy to illustrate the predictions of many of the dystopians in the area it has two things to commend it: it is simple, and it is wrong. There is certainly something alluring about the idea that it is a 'them and us', black-and-white situation and that there will be a single, conclusive result - but this is wrong. It would be comforting to think that it was a game that, somehow, we could win and then carry on with our real lives - but this is wrong. And it is worrying to reflect on the fact that chess programs represent some of the most stunning success stories in the recent history of AI, and to conclude that we would therefore inevitably be taken over - but this is wrong.

The claims that a takeover is inevitable amount to claims that the robots have the tempo, and that a combination of their intelligence and our supposed weaknesses means that it will never be regained.

What are the important and unimportant aspects of the chess analogy? Here, of course, the notion of the intentional stance (Dennett 1987) looms large. Whilst knowing the underlying mechanisms may give us some advantage, no particular mechanism seems to be necessary for a takeover. Indeed, it is likely that any takeover would require agents using a hybrid collection of approaches - diversity is strength. So we can imagine a scenario of a central intelligence (perhaps resident on the Internet) and an army of less intelligent robots to do the dirty work. That we think of the behaviour as intentional because of prima facie evidence is a psychological response (Bryson and Kime 1998). It is a response we should be mindful of, so that it does not cloud our judgement. The important aspects of the computer chess analogy are its formal aspects. The less the world and our actions in it mirror the formal nature of the chess game, the less likely any takeover scenario becomes.

So how would this mate be achieved? Perhaps by a simple, steady build-up of their forces: slowly gaining the dominant position in all parts of the board, followed by our eventual resignation. Or perhaps something more spectacular. It seems clear that a robot passing the Turing test will require a capacity for subterfuge; so maybe we will see something like a double, discovered mate.

In deference to the long tradition of philosopher chess masters and chess master philosophers, we should note the danger of Philidor's Legacy (smothered mate). Could we be beaten by an opponent that takes advantage of our own heavy defences? What scenario fits this chess classic? A missed opportunity because research was curtailed in the face of unreasonable fears about the consequences of AI research?

There are some scenarios which the chess model will not easily accommodate. Does the scenario which we have described as the case of co-evolution fit it? Well, yes and no. Maybe it is not a case of black versus white, but how about a model where Yoko decides that white was the best choice after all? In a game like that you would have to ask questions. Who's playing whom? How does mate happen? Where's the takeover? Whilst these questions only occur in the Yoko model, much still remains from the original chess analogy - the closed-world requirement, the prescribed rules of movement, the requirement of a motivation to play. All of these requirements diverge from reality in ways which cast doubt on the inevitability of a takeover, and consideration of the divergences leads us to the prescriptions for avoidance: don't connect to the world without good reason and confidence; don't allow winning the game to come down to a particular type of intelligence; and don't give them the need or desire to actually start playing. We play against ourselves - how can we be taken over?



Blay Whitby

2000-03-28