2) There is much debate on the issues that Perri 6 takes as read here. Current experiments with computer programs that simulate moral agency and the prisoner's dilemma suggest a strong possibility (if nothing more) that certain types of moral behaviour are entirely rational and could therefore be programmed. Similarly, research into co-operative behaviour between autonomous computational agents suggests that it may be possible to develop such agents towards the capabilities that are required here. Some relevant work is discussed in Danielson (1992).
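The kind of experiment alluded to above can be sketched in a few lines. The following is an illustrative toy model, not a reconstruction of Danielson's programs: an iterated prisoner's dilemma in which an explicitly programmed cooperative rule (tit-for-tat) is scored against unconditional defection, showing that a simple "moral" disposition can be both mechanical and rationally rewarding. All names and the round count are our own choices for the example.

```python
# Hypothetical illustration: iterated prisoner's dilemma with programmed strategies.
# Standard payoff matrix, keyed by (my_move, their_move); 'C' = cooperate, 'D' = defect.
PAYOFFS = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}

def tit_for_tat(history):
    """Cooperate on the first move, then mirror the opponent's previous move."""
    return 'C' if not history else history[-1][1]

def always_defect(history):
    """Defect unconditionally."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Return the two strategies' total scores over repeated play."""
    history_a, history_b = [], []  # each entry: (own_move, opponent_move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

# Two tit-for-tat players sustain cooperation and jointly outscore mutual defection.
print(play(tit_for_tat, tit_for_tat))      # (30, 30) over 10 rounds
print(play(always_defect, always_defect))  # (10, 10) over 10 rounds
```

The point of the sketch is only that the cooperative disposition here is entirely rule-governed, yet yields the better joint outcome; whether this counts as moral agency is exactly what is debated.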
3) Of course, presenting 'intelligence' as a uni-dimensional scale on which human and machine intelligence can be directly compared greatly simplifies the arguments we wish to criticise here. We believe there is no scientific reason to view intelligence as a single dimension on which humans and other entities can be directly compared.
4) When this paper was originally presented, concern was raised about the importance of ubiquity. The claim was made that we already live with ubiquitous technology and that no takeover seems imminent. The example of motors is a good one: most people would be surprised at the number of motors in their own homes. We agree that ubiquity in and of itself is not a cause for concern, but the technology we discuss here is such that ubiquity does become one. If there are technological artifacts that exhibit a software/hardware distinction and can communicate with other artifacts, then we have mechanisms that could turn a functional creep into a functional gallop. It is in scenarios such as these that curtailing ubiquity might be a necessary precaution.