It is well known that certain learning methods (e.g. the perceptron learning algorithm) cannot acquire complete parity mappings. But it is often overlooked that state-of-the-art learning methods such as C4.5 and backpropagation cannot generalise from incomplete parity mappings. The failure of such methods to generalise on parity mappings is sometimes dismissed as uninteresting, either on the grounds that it is 'impossible' to generalise over such mappings, or on the grounds that parity problems are mathematical constructs having little to do with real-world learning. However, this paper argues that such a dismissal is unwarranted. It shows that parity mappings are hard to learn because they are statistically neutral, and that statistical neutrality is a property we should expect to encounter frequently in real-world contexts. It also shows that the generalisation failure on parity mappings occurs even when large, minimally incomplete mappings are used for training purposes, i.e. precisely when claims about the impossibility of generalisation are particularly suspect.
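As a minimal sketch of what statistical neutrality means here (the term is used informally; the exact definition is in the paper), the snippet below enumerates the complete 3-bit parity mapping and checks that each individual input bit is uninformative about the class: the class is split 50/50 whether the bit is 0 or 1, so no single-feature statistic can drive generalisation.

```python
from itertools import product

def parity(bits):
    # Parity (XOR) target: 1 iff an odd number of input bits are set
    return sum(bits) % 2

n = 3
rows = list(product([0, 1], repeat=n))

# For every input bit, the conditional probability of class 1 is 0.5
# regardless of the bit's value: each feature is statistically neutral.
for i in range(n):
    ones = [parity(r) for r in rows if r[i] == 1]
    zeros = [parity(r) for r in rows if r[i] == 0]
    p_one = sum(ones) / len(ones)
    p_zero = sum(zeros) / len(zeros)
    print(f"bit {i}: P(class=1 | bit=1) = {p_one}, P(class=1 | bit=0) = {p_zero}")
```

Each line prints 0.5 for both conditionals, which is why learners that rely on per-feature statistics (such as the splitting criterion in C4.5) get no purchase on parity data.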