What makes a problem hard for XCS?

Tim Kovacs

Manfred Kerber

School of Computer Science
The University of Birmingham
Birmingham B15 2TT England

1.  Introduction

Two basic questions to ask about any learning system are: to what kinds of problems is it well suited, and to what kinds of problems is it poorly suited? Despite two decades of work, Learning Classifier Systems (LCS) researchers have had relatively little to say on the subject. This work attempts to answer these questions for Wilson's accuracy-based XCS [Wilson95a] applied to single-step problems.

2.  Dimensions of problem difficulty

Wilson briefly investigated complexity in [Wilson98a] using a series of boolean multiplexer functions of string lengths 6, 11, and 20. Each of these functions has a minimal, complete, accurate, and non-overlapping representation using the standard ternary LCS alphabet, called its optimal population [O] [Kovacs2000z]. The sizes of the optimal populations (|[O]| for short) for these functions are 16, 32, and 64 respectively. By evaluating the number of inputs XCS needed to learn each function, Wilson worked out an expression relating |[O]| and difficulty, and another relating string length and difficulty [Wilson98a]. Not surprisingly, functions of greater string length and |[O]| were more difficult. Our experience with multiplexer and parity functions indicates that larger |[O]| is more difficult even when string length is held constant. Fig. 1 shows the difficulty of a series of 6-bit functions of increasing |[O]|, with performance measured as %[O], the proportion of the optimal population present (see [Kovacs2000z] for details). In the following sections we discuss two additional, previously unknown, dimensions of problem complexity.
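To make |[O]| concrete, the optimal population for the 6-bit multiplexer can be enumerated directly. The sketch below (our illustration, not code from the cited papers) builds the 16 ternary rules: for each of the 4 address settings, only the addressed data bit is specified and the remaining bits are '#', and each condition is paired with both actions, since XCS retains accurate predictors of low reward as well as high.

```python
from itertools import product

def mux6(bits):
    """6-bit multiplexer: the first 2 bits address one of the 4 data bits."""
    addr = bits[0] * 2 + bits[1]
    return bits[2 + addr]

def optimal_population_6mux():
    """Enumerate [O] for the 6-multiplexer: a minimal, complete, accurate,
    non-overlapping set of ternary rules.  Each condition specifies the
    two address bits and the addressed data bit; all other bits are '#'.
    Each condition appears with both actions (one correct, one incorrect),
    as both are accurate predictors of their respective rewards."""
    rules = []
    for a0, a1 in product('01', repeat=2):
        addr = int(a0) * 2 + int(a1)
        for data in '01':
            cond = [a0, a1] + ['#'] * 4
            cond[2 + addr] = data
            for action in (0, 1):
                rules.append((''.join(cond), action))
    return rules

O = optimal_population_6mux()
print(len(O))  # → 16, matching |[O]| for the 6-multiplexer
```

The same construction for the 11- and 20-multiplexer (3 and 4 address bits) yields 32 and 64 rules, the |[O]| values quoted above.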

2.1.  The reward range

We expected the form of the reward function to have little effect on problem difficulty, because in XCS fitness is based on the accuracy with which a rule predicts rewards, not on the magnitude of the reward prediction. However, results show that the reward range (that is, the difference between the highest and lowest reward) can have a strong effect on problem difficulty [Kovacs2000z]. This is due to the use of the reward range in the normalisation of a rule's prediction error in the XCS fitness updates [Wilson95a].
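The mechanism can be sketched as follows. In [Wilson95a] a rule's prediction error is normalised by dividing by the reward range before accuracy is computed; the exact accuracy function varies between XCS descriptions, and the power-law form below (with the conventional parameter names eps0, alpha, nu) is our hedged illustration rather than the paper's definition. The point it shows is that the same absolute prediction error yields very different accuracies under different reward ranges:

```python
def accuracy(abs_error, reward_range, eps0=0.01, alpha=0.1, nu=5):
    """Accuracy of a rule as a function of its absolute prediction error,
    normalised by the reward range (a sketch of the XCS scheme; parameter
    names and the power-law form follow common usage, not a fixed source)."""
    eps = abs_error / reward_range   # normalised prediction error
    if eps < eps0:
        return 1.0                   # within tolerance: fully accurate
    return alpha * (eps / eps0) ** -nu

# the same absolute error of 50 in reward units:
print(accuracy(50, reward_range=1000))    # heavily penalised
print(accuracy(50, reward_range=10000))   # → 1.0, treated as fully accurate
```

Thus widening the reward range makes a fixed absolute error look more accurate, which is how the reward function's form feeds back into problem difficulty.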

Fig. 1. Difficulty in terms of %[O] increases with |[O]|, averaged over 1000 runs.

2.2.  Mean hamming distance

The Hamming distance between two strings is the number of bits which must be changed to transform one into the other. The mean Hamming distance (MHD) of an [O] is the mean distance over all possible pairs of its rules. Results show that functions with lower MHD are easier for XCS to learn [Kovacs2000z]. This effect is attributable to the greater ease with which crossover and mutation transform similar strings into each other: in [O]s with lower MHD it is easier to move from one accurate general rule to another.
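As a minimal sketch of the measure (our illustration; the paper defines MHD over pairs of rules in [O], and here we apply it to rule conditions), the MHD of the 6-multiplexer's optimal population can be computed directly:

```python
from itertools import combinations

def hamming(a, b):
    """Number of positions at which two ternary strings differ."""
    return sum(x != y for x, y in zip(a, b))

def mean_hamming_distance(conditions):
    """Mean Hamming distance over all unordered pairs of conditions."""
    pairs = list(combinations(conditions, 2))
    return sum(hamming(a, b) for a, b in pairs) / len(pairs)

# conditions of the 6-multiplexer's optimal population: address bits and
# the addressed data bit specified, all other bits '#'
conds = ['000###', '001###', '01#0##', '01#1##',
         '10##0#', '10##1#', '11###0', '11###1']
print(mean_hamming_distance(conds))  # → 3.0
```

A function whose [O] clusters its conditions more tightly than this would, by the result above, be easier for XCS to learn at the same string length.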

3.  Summary

We have elsewhere [Kovacs2000z] outlined an approach to the study of problem complexity for XCS, an approach limited to an important subset of problems. We've outlined a number of dimensions of problem complexity, some previously unknown. We've illustrated how a significant dimension, |[O]|, structures the space of functions and defines complexity classes within it. Based on this we've created a single step test suite template that's simple to describe and implement, and which scales to any length input string. We suspect that XCS can reliably learn any function from the class studied, given enough resources. So far a great deal of empirical evidence has been gathered to support this hypothesis, but no proof is available.


© Proceedings of the Third International Workshop on Learning Classifier Systems (IWLCS-2000), in the Joint Workshops of SAB 2000 and PPSN 2000, Tim Kovacs and Manfred Kerber