
Bounded Pareto Archives

Theoretical studies on convergence in multiobjective evolutionary algorithms (MOEAs)

David Corne and I have published several articles on Pareto archives and bounded Pareto archives (see below). These concern a very important element of many multiobjective evolutionary algorithms (MOEAs): the procedures for updating the store of nondominated solutions discovered during search. The Pareto archive has several roles in an MOEA: it often takes part in the generation of new solutions (i.e. "parents" are drawn from it); it can be used to estimate the quality of new solutions (as in PAES); and it is the final output of the algorithm --- the equivalent of the best solution in a single-objective EA. Keeping a bound on the archive size can be important because the Pareto optimal set may be infinitely large, but also because updating and searching through the archive will become very time-consuming if it is allowed to grow without bound.
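As a concrete illustration of this update procedure, here is a minimal Python sketch of the basic, unbounded rule, assuming all objectives are to be minimised (the function names are ours, for illustration only):

    def dominates(a, b):
        """True iff objective vector a Pareto-dominates b (minimisation):
        a is no worse in every objective and strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def archive_update(archive, y):
        """Insert objective vector y into the list `archive`, keeping only
        mutually nondominated points. Returns True iff y is accepted."""
        if any(dominates(a, y) or a == y for a in archive):
            return False                  # y is dominated by, or duplicates, a stored point
        archive[:] = [a for a in archive if not dominates(y, a)]
        archive.append(y)                 # discard newly dominated points, keep y
        return True

Both tests scan the whole store, so the cost of a single update grows with the archive size, which is the efficiency argument for bounding it given above.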

Several papers in the EMOO/MOEA literature have given theoretical results on MOEA convergence, defining convergence in a number of ways: convergence to at least one Pareto optimal solution, convergence to the whole Pareto front, or just convergence to a stable set. Asymptotic convergence results such as these are relatively simple to obtain, particularly when the population or archive is unbounded. For example, asymptotic convergence to the whole Pareto front requires only that the mutation operator connects every pair of points in the search space with nonzero probability, and that all nondominated solutions are retained (a stored solution may be discarded as soon as it becomes dominated).
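One hedged way to put this in symbols (a paraphrase of the standard argument for a finite search space, not a statement taken from any particular paper): writing y \prec y' for Pareto dominance under minimisation,

    y \prec y' \iff \forall i:\ y_i \le y'_i \ \wedge\ \exists j:\ y_j < y'_j,

if mutation reaches every point from every other with nonzero probability, and the archive A_t retains all nondominated solutions generated up to step t, then

    \Pr\big( Y^{*} \subseteq f(A_t) \big) \longrightarrow 1 \quad \text{as } t \to \infty,

where Y^{*} denotes the image of the Pareto optimal set.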

In contrast to these results, theoretical convergence results for bounded archives are more involved and more interesting. Laumanns et al. introduced the idea of analysing the "convergence" of a size-bounded archive in terms of how well it approximates the input sequence. That is, the generation of solutions is considered separately from the procedures for storing them in the archive, which isolates the important theoretical question of how to update the archive. In the second (2004) paper below, we also pursue this question and give some theoretical limitations on what can be achieved with respect to the quality of the approximation and the number of solutions in the archive. The last paper below relates to the "No Free Lunch" theorem for optimization. In it, we show that archiving algorithms do affect optimizer performance, even when performance is averaged over "the set of all problems" or other sets that are closed under permutation.
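The epsilon-dominance idea behind that line of work can be sketched briefly. The Python fragment below uses an additive grid and keeps at most one representative per nondominated box; it is a simplification for illustration (the grid constant eps, the dict representation, and the tie-break inside a box are our choices, not the exact scheme of Laumanns et al.):

    import math

    def box(y, eps):
        """Map objective vector y to its box index on an additive eps-grid."""
        return tuple(math.floor(v / eps) for v in y)

    def eps_archive_update(archive, y, eps):
        """Epsilon-dominance update (minimisation). `archive` is a dict mapping
        box index -> representative objective vector. dominates() is the helper
        from the first sketch above; it works on box indices too."""
        b = box(y, eps)
        for ob, oy in list(archive.items()):
            if ob == b:
                if dominates(oy, y):          # incumbent of the same box is better
                    return False
            elif dominates(ob, b):            # y's box is dominated at grid level
                return False
        for ob in [ob for ob in archive if dominates(b, ob)]:
            del archive[ob]                   # y's box dominates these boxes
        archive[b] = y                        # y becomes (or replaces) the representative
        return True

Because at most one point survives per box, the archive size is bounded by the number of grid boxes the front can intersect, which is where the trade-off between approximation quality and archive size becomes visible.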

Recent (2011) collaborative work with Manuel López-Ibáñez and Marco Laumanns looks further at the properties of six different archivers from the literature, including multi-level grid archiving (MGA), introduced by Laumanns in 2007. We propose that archivers guaranteeing a "better"-monotone update (roughly, that the stored set of points cannot deteriorate over time) are to be preferred. Currently, only hypervolume-based archiving and MGA have this property, and we show empirically why these two outperform other methods. MGA is potentially valuable because its update has polynomial complexity, whereas the update in hypervolume-based archiving has worst-case cost exponential in the number of objectives.
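For two objectives the hypervolume computations such an archiver needs are cheap, and the greedy rule "drop the member contributing least hypervolume" can be sketched as below (a simplified two-objective illustration building on the first sketch above, not the exact procedure analysed in the paper; ref is a reference point assumed to be dominated by every archive member):

    def hv_2d(points, ref):
        """Exact hypervolume of a nondominated 2-objective set w.r.t. ref
        (minimisation; every point is assumed to dominate ref)."""
        hv, prev_f2 = 0.0, ref[1]
        for f1, f2 in sorted(points):         # ascending f1 implies descending f2
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
        return hv

    def hv_bounded_update(archive, y, capacity, ref):
        """Accept y if nondominated (using archive_update() from the first
        sketch); when over capacity, drop the member whose removal loses
        the least hypervolume."""
        if not archive_update(archive, y):
            return False
        if len(archive) > capacity:
            total = hv_2d(archive, ref)
            worst = min(archive,
                        key=lambda a: total - hv_2d([p for p in archive if p != a], ref))
            archive.remove(worst)
        return True

With more objectives the same greedy rule applies, but no exact hypervolume algorithm with cost polynomial in the number of objectives is known, which is precisely the drawback that makes the polynomial-time MGA attractive.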

Keywords: hypervolume, hypervolume-based selection, multi-level grid archive, approximation quality, bounded archive, Pareto archive, archiving, nondominated solutions archive, secondary population, elite population, elite archive, epsilon dominance archive, adaptive grid archive, Lebesgue archive, external archive, external population, elitism, Joshua Knowles, David Corne, evolutionary computation, Pareto optimization, multiobjective optimization, multicriteria decision making (MCDM), convergence analysis, theory, MOEA, EMOO, MOO, NFL, no free lunch, EA.