D. Cliff, P. Husbands, I. Harvey
We have developed a methodology grounded in two beliefs: that autonomous agents need visual processing capabilities, and that the approach of hand-designing control architectures for autonomous agents is likely to be superseded by methods involving the artificial evolution of comparable architectures. In this paper we present results demonstrating that neural-network control architectures can be evolved for an accurate simulation model of a visually guided robot. The simulation system incorporates detailed models of the physics of a real robot built at Sussex, and the simulated vision uses ray-tracing computer graphics, with models of optical systems that could readily be constructed from discrete components. The control-network architecture is entirely under genetic control, as are parameters governing the optical system. Significantly, we demonstrate that robust visually guided control systems evolve from evaluation functions which do not explicitly involve monitoring visual input. The latter part of the paper discusses work now under development, which allows us to engage in long-term fundamental experiments aimed at thoroughly exploring the possibilities of concurrently evolving control networks and visual sensors for navigational tasks. This involves the construction of specialised visual-robotic equipment which eliminates the need for simulated sensing.
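The core idea of evolving a neural controller under genetic control can be sketched as a minimal genetic algorithm. The toy task below (a point agent steered toward a light by two crude photoreceptor inputs), the fixed feedforward encoding, and all fitness and mutation parameters are illustrative assumptions for exposition only; the paper's actual system evolves network topology and optical parameters as well, and evaluates against a detailed physics simulation.

```python
import math
import random

random.seed(0)

# Illustrative stand-in for the evolutionary setup: a 2-input, 2-output
# feedforward "controller" steers a point agent toward a light at the
# origin of a 2D arena. The genotype is a flat list of weights; the
# real Sussex genotypes also encode topology and sensor geometry.

N_IN, N_OUT = 2, 2
GENOME_LEN = (N_IN + 1) * N_OUT  # weights plus a bias per output unit

def controller(genome, inputs):
    """Map two sensor activations to left/right motor signals."""
    outs = []
    for j in range(N_OUT):
        base = j * (N_IN + 1)
        s = genome[base + N_IN]  # bias term
        for i in range(N_IN):
            s += genome[base + i] * inputs[i]
        outs.append(math.tanh(s))
    return outs

def evaluate(genome, steps=60):
    """Fitness: negative final distance to the light at (0, 0).

    Note the evaluation never inspects visual input directly; it only
    scores the behavioural outcome, echoing the paper's observation
    that visually guided control can evolve from such functions.
    """
    x, y, heading = 5.0, 5.0, 0.0
    for _ in range(steps):
        # Two crude "photoreceptors": signed bearing to the light,
        # split into left/right activations.
        bearing = math.atan2(-y, -x) - heading
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))
        left, right = max(0.0, bearing), max(0.0, -bearing)
        ml, mr = controller(genome, [left, right])
        heading += 0.3 * (mr - ml)       # differential steering
        x += 0.2 * math.cos(heading)     # constant forward speed
        y += 0.2 * math.sin(heading)
    return -math.hypot(x, y)

def evolve(pop_size=30, generations=40, mut_sigma=0.3):
    """Elitist GA: keep the top third, refill with mutated copies."""
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=evaluate, reverse=True)
        elite = scored[:pop_size // 3]
        pop = list(elite)
        while len(pop) < pop_size:
            parent = random.choice(elite)
            pop.append([w + random.gauss(0, mut_sigma) for w in parent])
    return max(pop, key=evaluate)
```

Because selection acts only on the behavioural score, any sensor-to-motor mapping that reliably closes the distance is favoured, without the evaluation function ever referencing the sensor activations themselves.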