Tom Froese and Emmet Spier
Standard crossover operators are often omitted from simple genetic algorithms (GAs) used for optimizing artificial neural networks because of the traditional belief that they generally disrupt the distributed functionality of the evolving solutions. The notion that crossover will be especially disruptive when the genetic representation has a many-to-one mapping between genotype and phenotype has become known as the 'permutation problem'. In contrast, this paper argues that these problems do not normally appear in practical use of simple GAs because populations converge quickly and then continue to move through the search space in this converged manner until a fitness optimum is found. After convergence all individuals are genetically similar and, moreover, distinct genetic permutations of the same phenotypic solution are unlikely to co-exist in the population. Genetic convergence thus minimizes the possibility of disruption caused by crossover. We have termed this the 'convergence argument'. This claim is investigated experimentally on standard benchmark problems, and the results provide empirical support.
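The convergence dynamic described in the abstract can be illustrated with a minimal simulation sketch (not taken from the paper): a simple GA with one-point crossover whose population diversity, measured as mean pairwise Hamming distance, collapses within a few generations. The OneMax fitness function, tournament selection, and all parameter values below are illustrative assumptions, not the experimental setup used in the paper.

```python
import random

def mean_pairwise_hamming(pop):
    # Average Hamming distance over all genotype pairs:
    # a simple measure of genetic convergence.
    n = len(pop)
    total = sum(
        sum(a != b for a, b in zip(pop[i], pop[j]))
        for i in range(n) for j in range(i + 1, n)
    )
    return total / (n * (n - 1) / 2)

def simple_ga(pop_size=20, length=30, gens=60, mut_rate=0.01, seed=0):
    # Illustrative simple GA on OneMax (maximize the number of 1-bits).
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    diversity = []
    for _ in range(gens):
        diversity.append(mean_pairwise_hamming(pop))
        fitness = [sum(g) for g in pop]

        def tournament():
            # Binary tournament selection.
            a, b = rng.sample(range(pop_size), 2)
            return pop[a] if fitness[a] >= fitness[b] else pop[b]

        new_pop = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Per-bit mutation with small probability.
            child = [bit ^ (rng.random() < mut_rate) for bit in child]
            new_pop.append(child)
        pop = new_pop
    return diversity

diversity = simple_ga()
# Once the population has converged, crossover recombines near-identical
# genotypes, so it can no longer be very disruptive.
print(f"initial diversity: {diversity[0]:.1f}, final: {diversity[-1]:.1f}")
```

Under these assumptions the initial diversity is close to half the genotype length (random bitstrings), while after selection the mean pairwise distance falls sharply, consistent with the abstract's claim that converged populations leave little room for crossover disruption.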