This paper examines the basins of attraction of random Boolean networks, a very general class of discrete dynamical systems in which cellular automata (CA) form a special sub-class. A reverse algorithm is presented which directly computes the set of pre-images (if any) of a network state. This computation is many orders of magnitude faster than exhaustive testing, making the detailed structure of random network basins of attraction readily accessible for the first time. Basins of attraction are portrayed as diagrams that connect the network's global states according to their transitions; typically, the topology is one of branching trees rooted on attractor cycles. The homogeneous connectivity and rules of CA are necessary for the emergence of coherent space-time structures such as gliders, the basis of CA models of artificial life. Random Boolean networks, on the other hand, have a vastly greater parameter/basin field configuration space capable of emergent categorisation. I argue that the basin of attraction field constitutes the network's memory, and not simply because separate attractors categorise state space: in addition, within each basin, sub-categories of state space are categorised along transient trees far from equilibrium, creating a complex hierarchy of content-addressable memory. This may answer a basic difficulty in explaining memory by attractors in biological networks, where transient lengths are probably astronomical. I describe a single-step learning algorithm for re-assigning pre-images in random Boolean networks, which allows their basin of attraction fields to be sculpted to approach any desired configuration; the process of learning and its side effects are made visible. In the context of many semi-autonomous, weakly coupled networks, the basin field/network relationship may provide a fruitful metaphor for the mind/brain.
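To make the objects in the abstract concrete, here is a minimal Python sketch of a random Boolean network, its global state transition, and pre-image computation. The paper's fast reverse algorithm is not reproduced here; pre-images below are found by the exhaustive testing it is compared against, and `reassign_pre_image` is only a hedged guess at the flavour of one-step learning by rule-table mutation, not the paper's exact procedure. All function names are illustrative, not from the paper.

```python
import random
from itertools import product

def make_rbn(n, k, rng):
    """Random Boolean network: each of n nodes reads k inputs (random
    wiring) and applies a randomly chosen Boolean rule of k inputs."""
    wiring = [tuple(rng.randrange(n) for _ in range(k)) for _ in range(n)]
    rules = [tuple(rng.randrange(2) for _ in range(2 ** k)) for _ in range(n)]
    return wiring, rules

def step(state, wiring, rules):
    """Synchronously update every node: one global state transition."""
    nxt = []
    for inputs, rule in zip(wiring, rules):
        idx = 0
        for i in inputs:                 # pack the k input bits into an index
            idx = (idx << 1) | state[i]
        nxt.append(rule[idx])
    return tuple(nxt)

def pre_images(target, n, wiring, rules):
    """Pre-images of a global state by exhaustive search over all 2^n
    states -- the slow baseline the paper's reverse algorithm avoids."""
    return [s for s in product((0, 1), repeat=n)
            if step(s, wiring, rules) == target]

def reassign_pre_image(s, target, wiring, rules):
    """Make s a pre-image of target in one step by overwriting, for each
    node, the rule-table entry that s addresses (an illustrative sketch
    of learning by rule mutation, not the paper's algorithm)."""
    new_rules = []
    for inputs, rule, bit in zip(wiring, rules, target):
        idx = 0
        for i in inputs:
            idx = (idx << 1) | s[i]
        r = list(rule)
        r[idx] = bit
        new_rules.append(tuple(r))
    return new_rules
```

Because the update is deterministic, every global state has exactly one successor, so the pre-image counts over all 2^n states always sum to 2^n; states with no pre-images are the "garden-of-Eden" leaves of the transient trees.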
This paper is not available online