Margaret A. Boden
What science tells us about human autonomy is practically important, because it affects the way that ordinary people see themselves. Denials of one's capacity for self-control are experienced as threatening.

The sciences of the artificial (AI and A-Life) support two opposing intuitions concerning autonomy. One, characteristic of "classical" AI, is that determination of behaviour by the external environment lessens an agent's autonomy. The other, characteristic of A-Life and situated robotics, is that to follow a pre-conceived internal plan is to be a mere puppet (one can no longer say "a mere robot"). These intuitions can be reconciled, since autonomy is not an all-or-none property. Three dimensions of behavioural control are crucial:

(1) the extent to which response to the environment is direct (determined only by the present state of the external world) or indirect (mediated by inner mechanisms partly dependent on the creature's previous history);

(2) the extent to which the controlling mechanisms were self-generated rather than externally imposed;

(3) the extent to which inner directing mechanisms can be reflected upon and/or selectively modified.

Autonomy is the greater, the more behaviour is directed by self-generated (and idiosyncratic) inner mechanisms, nicely responsive to the specific problem-situation, yet reflexively modifiable by wider concerns.

An A-Life worker has said: "The field of Artificial Life is unabashedly mechanistic and reductionist. However, this new mechanism ... is vastly different from the mechanism of the last century." One difference is the emphasis on emergent properties. Even classical AI goes beyond what most people think of as "machines". The "reductionism" of the sciences of the artificial denies that the only respectable concepts lie at the most basic ontological level. AI and A-Life help us to understand how human autonomy is possible.