Fractal Functions

A Single Objective Optimisation Problem

The Fractal Functions are designed to overcome a number of biases which exist in many functions traditionally used as benchmarks for real-valued optimisation.

The functions are currently being used for the Mountains or Molehills (Scale-invariant Optimisation) and Large Scale (High Dimensional) Global Optimisation competitions at WCCI2008 (CEC2008).

Many functions used for benchmarking are built from a number of elementary mathematical functions. These functions necessarily reduce in detail (or "smooth out") as you zoom in on the function, meaning that they also smooth out as a solution algorithm converges on a solution. This has the potential to favour some algorithms that might otherwise not perform so well. In particular, it tips the balance between exploration and exploitation. For a more extensive discussion of the biases that can occur in benchmarking and motivation for the functions see the first reference below.

The fractal functions, on the other hand, have the property of randomised self-similarity - as the resolution increases, the average level of detail remains the same. This avoids many of these biases, and allows the balance between exploration and exploitation in solution algorithms to be examined more rigorously.

Each function is composed of a large number of base functions, or *unit functions*, so called because their non-zero values lie within a unit hypercube. To preserve the self-similarity property, the average number of base functions falling within any hypercube of the fractal function increases with the inverse of its size raised to the power of the dimension.
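The scaling law above can be sketched directly. The snippet below (class and method names are illustrative, not part of the package API) computes the expected number of unit-function applications intersecting a hypercube of a given side length, given the density stated in the text:

```java
// Sketch of the stated scaling law: the average number of base functions
// in a hypercube grows with the inverse of its size to the power of the
// dimension. "density" is the average count per unit hypercube.
public class UnitDensity {
    public static double expectedCount(double density, double side, int dimension) {
        // density * (1/side)^dimension
        return density * Math.pow(1.0 / side, dimension);
    }
}
```

For example, with a density of 3 in two dimensions, a hypercube of side 0.5 is expected to contain 3 * (1/0.5)^2 = 12 base function applications.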

Some examples of different 2-D functions, along with the unit function from which they are constructed, are shown below.

The first function above is continuous but not continuously differentiable. The second is both continuous and continuously differentiable. The third is neither: it is only piecewise continuous. All functions wrap in all dimensions.

The functions shown have a similar look to each other despite being generated by different base functions. This is because they have the same sequence number, as described below.

Each fractal function is determined by a base function, the *fractal depth* (how "deep" the self-similarity extends), the *density* (average number of base function applications per hypercube at any depth), and an *index* or sequence number, which seeds the pseudorandom number generator. If we fix the first three of these and vary the index, we call this a benchmarking *series*. The next three functions in the Sphere series are shown below.

We believe it is important to use a series of functions for benchmarking to avoid over-specialisation of algorithms (or over-training). As is common practice in machine learning, different problem instances from the series should be chosen for honing the algorithm (the "training set") than are used for testing its performance (the "test set").

For this reason, our competitions specify the series to be used, but the problem instances from within the series are randomly selected at test time.
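A minimal sketch of that selection step, under assumed index ranges (the actual ranges are set by each competition, not by this code):

```java
import java.util.Random;

// Sketch of the test-time protocol described above: problem instances are
// drawn at random from the specified series. Range bounds are illustrative.
public class SeriesSplit {
    public static int[] drawTestIndices(int count, int lowest, int highest, long seed) {
        Random rng = new Random(seed);
        int[] indices = new int[count];
        for (int i = 0; i < count; i++) {
            // Each draw is an index (sequence number) within the series.
            indices[i] = lowest + rng.nextInt(highest - lowest + 1);
        }
        return indices;
    }
}
```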

This package contains everything you need to run the functions from Java or Matlab, and to participate in the competitions.

When unzipped it will create a directory named after the version number. This will contain everything that is needed in the "right place", including the Jar file, the example files, and the documentation (start by opening index.html in the main directory).

FractalFunctions-1.0RC1.zip | Version 1.0 Release Candidate 1 | 2nd June 2008

This project is now being open-sourced. The sources are not required to run the code, but may be of interest if you wish to contribute to this project.

So that we can keep a single version branch for the competitions and for comparisons in research papers, please submit any proposed enhancements, bug fixes, etc. to Cara MacNish.

FractalFunctions-1.0RC1.src.zip | Version 1.0 Release Candidate 1 | 2nd June 2008

License: This software is freely distributed under the GNU General Public License with no warranty. You are free to modify the software for your own use. If you pass it on to others you should clearly mark it as modified, and include this notice and a link back to this site.

The following code for the WCCI2008 Large Scale Optimisation Competition (Function 7 - FastFractal "DoubleDip") has been contributed by Ales Zamuda from the University of Maribor, Slovenia. It uses the GCJ/CNI interface to run the Java code from C++.

cec08-f7-cpp-with-sources-1.0RC1.zip | Using Version 1.0RC1 | 2nd June 2008

MacNish, C., Towards Unbiased Benchmarking of Evolutionary and Hybrid Algorithms for Real-valued Optimisation, *Connection Science*, Vol. 19, No. 4, December 2007. To appear. (Pre-review draft.)

Copyright notice: This is a preprint of an article submitted for consideration in Connection Science © 2007 Taylor & Francis; Connection Science is available online at: http://journalsonline.tandf.co.uk/.

MacNish, C., Benchmarking Evolutionary and Hybrid Algorithms using Randomized Self-Similar Landscapes, Proc. 6th International Conference on Simulated Evolution and Learning (SEAL'06), LNCS 4247, pp. 361-368, Springer, 2006.
