
Bounded Pareto Archives
Theoretical studies on convergence in multiobjective evolutionary algorithms (MOEAs)
David Corne and I have published several articles
on Pareto archives and bounded Pareto archives (see below). These concern a
very important element of many multiobjective evolutionary
algorithms (MOEAs): the procedures for updating the store of
nondominated solutions discovered during search.
The Pareto archive has several roles in an MOEA:
it often takes part in the generation of new solutions (i.e. "parents"
are drawn from it); it can be used to estimate the quality of new
solutions (as in PAES); and it is the final output of the algorithm,
the equivalent of the best solution found in a single-objective
EA. Keeping a bound on the archive size can be important because the Pareto
optimal set may be infinitely large, but also because updating and
searching through the archive becomes very time-consuming if the archive is
allowed to grow without bound.
Several papers in the EMOO/MOEA literature have given theoretical
results on MOEA convergence, defining
convergence in a number of ways: convergence to at least one Pareto
optimal solution, convergence to the whole Pareto front, or just
convergence to a stable set. Asymptotic convergence results such as
these are relatively simple to obtain, particularly when the population
or archive is unbounded. For example,
asymptotic convergence to the whole Pareto front only requires
that the mutation operator connects every point in the
search space with some nonzero probability, and that all nondominated
solutions are retained (as soon as a stored solution becomes dominated it may
be discarded).
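To make the last point concrete, the core of an unbounded Pareto archive update can be sketched in a few lines of Python. This is only an illustrative sketch (minimisation is assumed, and the function names are mine, not taken from any of the papers below):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Unbounded Pareto archive update: reject the candidate if any stored
    vector dominates it; otherwise discard any stored vectors it dominates
    and add it. The archive thus always holds mutually nondominated vectors."""
    if any(dominates(s, candidate) for s in archive):
        return archive  # candidate is dominated: archive unchanged
    return [s for s in archive if not dominates(candidate, s)] + [candidate]
```

With no size bound, this archive retains every nondominated vector seen so far, which is exactly the retention condition the asymptotic convergence results require.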
In contrast to these results, theoretical convergence
results for bounded archives are more involved and more interesting.
Laumanns et al. introduced the idea of analysing the "convergence" of a
size-bounded archive in terms of how well it approximates the input sequence.
That is, the generation of solutions is considered separately
from the procedures for storing the solutions in the archive. This isolates the important theoretical
question of how to update the archive. In the second (2004) paper below, we also
pursue this question and give some theoretical limitations on what can be
achieved with respect to the quality of the approximation and the number of
solutions in the archive. The last paper below relates to the
"No Free Lunch" theorem for optimization. In it, we show that
archiving algorithms do affect optimizer
performance, even when performance is averaged over "the set of all problems"
or other sets that are closed under permutation.
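A minimal sketch in the spirit of the epsilon-dominance archiving idea of Laumanns et al. may help here (the published algorithms differ in detail; the grid mapping and function names below are illustrative assumptions, not the exact published method). Each objective vector is mapped to a box on an epsilon-grid, and at most one vector per nondominated box is retained, so the archive size is bounded by the number of boxes while the archive remains an approximate Pareto set of the input sequence:

```python
import math

def box(v, eps):
    """Map an objective vector to its epsilon-grid box (minimisation)."""
    return tuple(math.floor(f / eps) for f in v)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def weakly_dominates(a, b):
    return all(x <= y for x, y in zip(a, b))

def eps_update(archive, v, eps):
    """Keep at most one vector per nondominated epsilon-box."""
    bv = box(v, eps)
    for s in list(archive):
        bs = box(s, eps)
        if bs != bv and weakly_dominates(bs, bv):
            return archive  # v's box is dominated by a stored box: reject v
        if bs == bv:
            # same box: keep the incumbent unless v strictly dominates it
            if dominates(v, s):
                archive.remove(s)
                archive.append(v)
            return archive
    # v's box is nondominated: evict vectors in boxes it dominates, then add v
    archive[:] = [s for s in archive if not dominates(bv, box(s, eps))]
    archive.append(v)
    return archive
```

Comparing boxes rather than raw vectors is what buys the size bound; the price, analysed in the papers, is that the stored set only approximates the true nondominated set to within a factor governed by eps.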
Recent (2011) collaborative work with Manuel López-Ibáñez and Marco
Laumanns looks further at the properties of six different archivers from the literature,
including multi-level grid archiving (MGA), introduced by Laumanns in 2007. We propose that
archivers guaranteeing a "better"-monotone update (roughly, that the stored set of
points cannot deteriorate over time) are to be preferred. Currently, only hypervolume-based
archiving and MGA have this property, and we show empirically why these two are to be
preferred over other methods. MGA is potentially valuable because it has polynomial
complexity, whereas the update in hypervolume archiving is exponential in the objective
space dimension.
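To give a flavour of hypervolume-based archiving, here is a simplified two-objective sketch (minimisation; the reference point, capacity and function names are my assumptions, and real implementations are considerably more involved, especially in higher dimensions). When the archive exceeds its bound, the member with the smallest exclusive hypervolume contribution is dropped:

```python
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def hv_contributions(points, ref):
    """Exclusive hypervolume contribution of each point of a mutually
    nondominated two-objective set (minimisation), w.r.t. reference point ref.
    Sorting ascending in f1 makes f2 descending, so each point exclusively
    dominates one rectangle bounded by its neighbours (or ref at the ends)."""
    pts = sorted(points)
    contrib = {}
    for i, (x, y) in enumerate(pts):
        x_next = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        y_prev = pts[i - 1][1] if i > 0 else ref[1]
        contrib[(x, y)] = (x_next - x) * (y_prev - y)
    return contrib

def bounded_hv_update(archive, v, capacity, ref):
    """Accept v if nondominated; if the archive then exceeds its bound,
    drop the member with the smallest exclusive hypervolume contribution."""
    if any(dominates(s, v) for s in archive):
        return archive
    archive = [s for s in archive if not dominates(v, s)] + [v]
    if len(archive) > capacity:
        c = hv_contributions(archive, ref)
        archive.remove(min(archive, key=lambda s: c[s]))
    return archive
```

The two-objective contribution computation above is cheap, but as noted, the general update in hypervolume archiving scales exponentially with the number of objectives, which is what makes the polynomial-time MGA alternative attractive.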

López-Ibáñez, M., Knowles, J.D. and Laumanns, M. (2011) On Sequential Online Archiving of Objective Vectors. Evolutionary Multi-Criterion Optimization (EMO 2011), Lecture Notes in Computer Science. Springer, Heidelberg, Germany. BibTeX Software
Knowles, J.D. and Corne, D.W. (2004) Bounded Pareto archiving: theory and practice. In X. Gandibleux, M. Sevaux, K. Sorensen and V. T'kindt (Eds.), Metaheuristics for Multiobjective Optimisation, Lecture Notes in Economics and Mathematical Systems, Volume 535, Springer. pp. 39-64. BibTeX Abstract PDF
Knowles, J.D. and Corne, D.W. (2003) Properties of an adaptive archiving algorithm for storing nondominated vectors. IEEE Transactions on Evolutionary Computation, 7(2), pp. 100-116. April. Draft version. Original available from IEEE Press.
Knowles, J.D., Corne, D.W. and Fleischer, M. (2003) Bounded archiving using the Lebesgue measure. Proceedings of the IEEE Congress on Evolutionary Computation. Volume 4, pp. 2490-2497. IEEE Press. BibTeX Abstract PDF Note: the Lebesgue measure archive as described in this paper does not have the time complexity stated. Lyndon While recently proved that Fleischer's Lebesgue measure computation algorithm has complexity exponential in the number of objectives, not polynomial as originally claimed. However, this does not substantially change the conclusions drawn in this paper.
Corne, D.W. and Knowles, J.D. (2003) Some multiobjective optimizers are better than others. Proceedings of the IEEE Congress on Evolutionary Computation. Volume 4, pp. 2506-2512. IEEE Press. BibTeX Abstract PDF

