The writers mentioned above are partly engaged in technical predictions about likely future progress in robots (defined in the above generic manner) and associated technologies. Very often their technical predictions assume an accelerating rate of technical development. De Garis, for example, appeals to Moore's law to justify an annual doubling of computer power (de Garis 1999 Ch.2) and later claims frankly (de Garis 1999 Ch.5): 'progress will be exponential'. Warwick is more circumspect, arguing only for the possibility of increasing rates of development (although his predictions for the year 2050 suggest that he too forecasts a vast increase in the rate of technical development compared with present-day rates of progress) (Warwick 1998 Ch.2 and Ch.12).
We cannot show that accelerating technical progress is impossible (note 1). It is always possible in principle that a breakthrough of earth-shattering significance might be made tomorrow, allowing the sort of technical developments that these scenarios require.
A non-technical denial of the possibility of a robot takeover is provided by Perri 6 (Perri 6 1999 pp.93-96). Essentially his argument is that for autonomous artificially intelligent machines (which are included in our generic use of the word 'robot', and may, in fact, be co-extensive with it) to take over, a number of conditions must be met. Prominent among these are that they must acquire a vast amount of real-world knowledge, the capacity for judgement, and the capacity for collective action. Perri 6 feels that the first requirement would be very expensive (in terms of time and energy) for the robots, a point with which we agree, though we emphasize that expense does not entail impossibility. Of the latter two requirements, Perri 6 argues that in order to acquire these properties the robots would have to become, as he puts it, 'machine persons'.
This is for a number of reasons, according to Perri 6. Judgement, he claims, involves 'analogical and lateral, rather than exclusively analytical and vertical reasoning methods' (Perri 6 op.cit.). The capacity for collective action, he feels, would similarly not be open to robots capable only of rational thought: purely rational agents find co-operation more difficult than those with shared cultural values. Ultimately, on his account, the only way for robots to acquire those properties beyond pure intelligence which might enable them to gain power over humans is effectively to become full participants in human society. This, he claims, is the 'central incoherence of the myth of take-over'. Such a take-over would not be the sort of possibility we have considered under our first scenario. Indeed, it would not much resemble the other two scenarios, either. It would look much more like the acquisition of power within human society by the sorts of means, and for the sorts of reasons, that humans typically consider legitimate.
This is a powerful argument to which we will return. For present purposes, however, we claim that it is not strong enough to show the impossibility of a robot take-over. We do not share Perri 6's conviction that purely rational robots would be incapable of the sort of judgement or collective action required to gain power over human beings (note 2). This does not mean, on the other hand, that we can accept Warwick's assertion that intelligence is a sufficient condition for achieving power. We claim merely that it is in principle possible that some sort of robot, or similar collection of devices, might one day be constructed which could achieve the sort of tyrannical domination over humans described in our first scenario.