Research - Ran Cheng
Dr. Ran Cheng, Ph.D.
Research Fellow, CERCIA Group, School of Computer Science, University of Birmingham, United Kingdom, B15 2TT
Email: ranchengcn (at) gmail.com
Curriculum Vitae, Google Scholar, ResearchGate


With the development of modern engineering design and scientific research, optimization problems have become increasingly complex. Many of these complex optimization problems can no longer be solved using traditional mathematical tools. Instead, there is increasing interest in adopting computational intelligence (CI) techniques (a set of nature-inspired computational methodologies and approaches) to address them.

I am particularly interested in CI-based optimization of complex real-world problems, e.g., large optimization problems. Related research topics include swarm intelligence, model-based evolutionary algorithms, evolutionary multi-objective optimization, evolutionary multi-modal optimization, etc.

Without loss of generality, an optimization problem (stated here as minimization) can involve one or multiple objectives to be optimized, which can be formulated in a uniform manner as:

$$
\begin{aligned}
\min_{\mathbf{x}} \quad & \mathbf{f}(\mathbf{x}) = (f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_M(\mathbf{x})), \\
\text{s.t.} \quad & \mathbf{x} \in X, \quad \mathbf{f} \in Y, \\
& g_i(\mathbf{x}) \le 0, \quad i = 1, \ldots, P, \\
& h_j(\mathbf{x}) = 0, \quad j = 1, \ldots, Q
\end{aligned}
$$
where $$X \subseteq \mathbb{R}^D$$ is the decision space and $$\mathbf{x} = (x_1, x_2, ..., x_D) \in X$$ is the decision vector; $$Y \subseteq \mathbb{R}^M$$ is the objective space and $$\mathbf{f} \in Y$$ is the objective vector. The decision vector $$\mathbf{x}$$ is composed of $$D$$ decision variables, while the objective vector $$\mathbf{f}$$ is composed of $$M$$ objective functions that map $$\mathbf{x}$$ from $$X$$ to $$Y$$. $$g_i(\mathbf{x})$$ and $$h_j(\mathbf{x})$$ are known as the inequality and equality constraints, respectively. If there is only one objective function, i.e., $$M = 1$$, the problem is known as a single-objective optimization problem (SOP); if there is more than one objective function and the objectives conflict with each other, i.e., $$M > 1$$, the problem is known as a multi-objective optimization problem (MOP). In particular, if an MOP has more than three objectives, it is known as a many-objective optimization problem (MaOP) due to the particular challenges it poses.
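
To make the formulation concrete, the following minimal Python sketch (an illustrative toy problem, not one taken from the publications below) evaluates a simple unconstrained bi-objective instance and checks Pareto dominance between objective vectors:

```python
# An illustrative bi-objective instance (for demonstration only):
# f1(x) = sum(x_i^2), f2(x) = sum((x_i - 2)^2), so M = 2.

def evaluate(x):
    """Map a decision vector x in X to an objective vector f in Y."""
    f1 = sum(xi ** 2 for xi in x)
    f2 = sum((xi - 2.0) ** 2 for xi in x)
    return (f1, f2)

def dominates(fa, fb):
    """fa Pareto-dominates fb: no worse in all objectives, better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

fa = evaluate([0.5])   # (0.25, 2.25)
fb = evaluate([3.0])   # (9.0, 1.0)
fc = evaluate([4.0])   # (16.0, 4.0)
print(dominates(fa, fb))  # False: fa and fb are mutually non-dominated
print(dominates(fb, fc))  # True
```

With $$M > 1$$, dominance replaces the total order of single-objective comparison, which is why the solution to an MOP is a set (the Pareto set) rather than a single point.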

Large optimization problems generally refer to those involving a large number of decision variables and/or many objectives. To be specific, large optimization problems can be categorized into the following four types:

• L-SOPs: Large-scale Single-objective Optimization Problems [2] [3];
• L-MOPs: Large-scale Multi-objective Optimization Problems [4];
• MaOPs: Many-objective Optimization Problems [5] [6];
• L-MaOPs: Large-scale Many-objective Optimization Problems [7] [8].

It is worth noting that, in the single-objective optimization literature, L-SOPs are often referred to simply as large-scale optimization problems, which differs from the broader definition presented here.

## Swarm Intelligence

Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial. Representative SI paradigms include particle swarm optimization, ant colony optimization, bee algorithms, etc.

## Particle Swarm Optimization [2] [3]

Particle swarm optimization (PSO) is a popular swarm intelligence paradigm widely used for solving single-objective optimization problems (SOPs). However, traditional PSO algorithms suffer severely from the curse of dimensionality: their performance deteriorates rapidly as the number of decision variables increases.

[Figure: Framework of CSO; swarm sorting in SL-PSO]

We explore the usage of a competition mechanism and a social learning mechanism between particles within one single swarm to further enhance swarm diversity for large-scale optimization. Correspondingly, two PSO variants are proposed, i.e., the competitive swarm optimizer (CSO) and the social learning particle swarm optimization algorithm (SL-PSO). In CSO and SL-PSO, instead of learning from $$gbest(t)$$ or $$pbest_i(t)$$, each particle learns from better particles in the current swarm. While CSO is designed for solving L-SOPs only, SL-PSO, which is enhanced with a dimension-dependent adaptive parameter control strategy, is not only capable of handling large-scale SOPs but is also able to solve low-dimensional SOPs properly. More importantly, compared to most existing PSO variants, the proposed CSO and SL-PSO are algorithmically very simple, as no additional operator is involved.
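
The pairwise competition at the heart of CSO can be sketched as follows. This is a simplified illustration with made-up parameter settings, not the reference implementation released with [2]:

```python
import random

def cso_minimize(f, dim, swarm_size=40, iters=200, lb=-5.0, ub=5.0, phi=0.1):
    """A minimal sketch of the competitive swarm optimizer (CSO).
    Each generation, particles are randomly paired; the loser of each
    pairwise fitness competition learns from the winner and from the swarm
    mean position, while the winner survives unchanged. Parameter values
    here are illustrative, not tuned settings from the paper."""
    X = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(swarm_size)]
    V = [[0.0] * dim for _ in range(swarm_size)]
    for _ in range(iters):
        mean = [sum(x[d] for x in X) / swarm_size for d in range(dim)]
        order = list(range(swarm_size))
        random.shuffle(order)
        for i in range(0, swarm_size - 1, 2):
            a, b = order[i], order[i + 1]
            win, lose = (a, b) if f(X[a]) <= f(X[b]) else (b, a)
            for d in range(dim):
                r1, r2, r3 = random.random(), random.random(), random.random()
                V[lose][d] = (r1 * V[lose][d]
                              + r2 * (X[win][d] - X[lose][d])
                              + phi * r3 * (mean[d] - X[lose][d]))
                X[lose][d] = min(ub, max(lb, X[lose][d] + V[lose][d]))
    return min(X, key=f)

best = cso_minimize(lambda x: sum(v * v for v in x), dim=10)
```

Note that neither a global best nor personal bests appear anywhere: only half the swarm is updated per generation, and the learning targets are drawn from the current swarm itself, which is what keeps the algorithm simple and diversity-preserving.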

## Model Based Evolutionary Algorithms

Model-based evolutionary algorithms (MBEAs) generally refer to the application of machine learning techniques to evolutionary computation, spanning theories, algorithms, systems, and applications. Related topics include estimation of distribution algorithms, Bayesian optimization algorithms, surrogate-assisted evolutionary computation, data-driven optimization, inverse modelling for multi-objective optimization, etc.

## Inverse Modelling for Multi-objective Optimization [4]

In multi-objective optimization, most multi-objective evolutionary algorithms (MOEAs) focus on the development of an effective fitness calculation or selection strategy when adapting single-objective EAs to solving MOPs. Not much attention has been paid to designing effective reproduction strategies that explicitly exploit the connectedness and regularity in the distribution of Pareto optimal solutions.

[Figure: Framework of IM-MOEA; decomposition of the inverse modelling]

We have designed an inverse modelling based multi-objective evolutionary algorithm, IM-MOEA for short. Different from most existing MOEAs, the proposed IM-MOEA constructs models that map non-dominated solutions from the PF in the objective space to the PS in the decision space. The original $$m$$-input $$n$$-output multivariate inverse model is decomposed into $$m\times n$$ univariate models, which significantly simplifies model building and removes the need for dimensionality reduction. Each univariate inverse model is then realized by a Gaussian process, which has the advantage of modeling both the global regularity and the local randomness in the distribution of the non-dominated solutions during the search. A random grouping method is introduced to reduce the number of inverse models needed, which considerably enhances the performance of the proposed IM-MOEA on L-MOPs that involve a large number of decision variables.
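
The idea of a univariate inverse model, mapping one objective value back to one decision variable, can be illustrated with the following sketch. For simplicity, an ordinary least-squares line stands in for the Gaussian process used in IM-MOEA, and the sample data are made up:

```python
# Illustrative sketch of a univariate inverse model: fit a mapping from
# one objective value back to one decision variable using non-dominated
# samples. A least-squares line replaces the Gaussian process of IM-MOEA
# here purely for brevity; the data below are fabricated for illustration.

def fit_univariate_inverse(f_vals, x_vals):
    """Least-squares fit x ~ a * f + b; returns the model as a callable."""
    n = len(f_vals)
    mf = sum(f_vals) / n
    mx = sum(x_vals) / n
    cov = sum((f - mf) * (x - mx) for f, x in zip(f_vals, x_vals))
    var = sum((f - mf) ** 2 for f in f_vals)
    a = cov / var
    b = mx - a * mf
    return lambda f: a * f + b

# Suppose along a 1-D Pareto set we observed x = 0.5 * f1 exactly:
f1 = [0.0, 1.0, 2.0, 3.0]
x1 = [0.0, 0.5, 1.0, 1.5]
inverse = fit_univariate_inverse(f1, x1)
print(inverse(4.0))  # 2.0 for this linear data
```

Sampling desired points in the objective space and pushing them through such inverse models yields candidate decision vectors directly, which is the reproduction mechanism that distinguishes IM-MOEA from crossover/mutation-based MOEAs.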

## Evolutionary Multi-objective Optimization

Evolutionary multi-objective optimization (EMO) generally refers to the methodologies of applying population based metaheuristics (e.g., EAs) to solving MOPs (or MaOPs). Related topics include many-objective optimization, preference articulation/integration and decision-making, constraint handling, etc.

## Reference Vector Guided Many-objective Optimization [5]

In many-objective optimization (MaOO), the usage of preferences becomes particularly important, not only because the user may be interested in only a part of the Pareto optimal solutions, but also because it is less practical to obtain a representative subset of the whole Pareto front (PF) using an evolutionary algorithm with a limited population size.

[Figure: Reference vectors; association of candidate solutions]

Motivated by ideas in decomposition based approaches and by the aim of achieving the preferred part of the PF when the number of objectives is large, we propose a reference vector guided evolutionary algorithm (RVEA) for solving MaOPs. The proposed RVEA makes several new contributions. Firstly, a scalarization approach, termed the angle penalized distance (APD), is designed to dynamically balance convergence and diversity in many-objective optimization according to the number of objectives and the number of generations. Secondly, an adaptive strategy is proposed to adjust the reference vectors to deal with objective functions that are not well normalized. Thirdly, to deal with irregular (disconnected or degenerate) PFs, a reference vector regeneration strategy is proposed. In addition, a preference articulation method is specifically tailored for the proposed RVEA.
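
The APD scalarization can be sketched as follows; the formula follows the description above, while the exponent $$\alpha = 2$$ and the example inputs are illustrative assumptions rather than settings taken from the paper:

```python
import math

def apd(f, v, gamma_v, t, t_max, M, alpha=2.0):
    """A sketch of the angle penalized distance:
        d = (1 + M * (t / t_max)^alpha * theta / gamma_v) * ||f||,
    where theta is the angle between objective vector f (translated to the
    ideal point) and the unit reference vector v, and gamma_v is the angle
    between v and its nearest neighbouring reference vector. The penalty
    grows with generation t, shifting emphasis from convergence (distance)
    early on to diversity (angle) later in the run."""
    norm = math.sqrt(sum(c * c for c in f))
    cos_theta = sum(a * b for a, b in zip(f, v)) / norm
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    penalty = M * (t / t_max) ** alpha * theta / gamma_v
    return (1.0 + penalty) * norm

# A solution perfectly aligned with its reference vector incurs no penalty:
d = apd([0.6, 0.8], [0.6, 0.8], gamma_v=math.pi / 4, t=50, t_max=100, M=2)
print(d)  # 1.0: theta = 0, so d equals ||f||
```

Each candidate is assigned to the reference vector with which it has the smallest angle, and within each sub-population the solution with the smallest APD is selected.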

## Many-objective Optimization of Hybrid Electric Vehicle (HEV) Control [6]

In hybrid electric vehicles (HEVs), propulsion is provided by a combination of an internal combustion engine (ICE) and an electric motor (EM). As a key element in the design of HEVs, the energy management controller (controller for short hereafter) plays an important role in guaranteeing the peak performance of the hybrid power unit.

[Figure: Basic structure of an HEV; seven-objective HEV controller model]

One of the most important objectives of a controller is to minimize fuel consumption, and thus most existing work on optimal controller design has focused exclusively on this aspect. To achieve optimal control with a focus on fuel consumption, traditional single-objective optimization methods such as dynamic programming (DP) have been applied, discretizing the decision variables in the control models and investigating all possible actions. However, apart from fuel consumption, other factors may also influence the overall performance of HEVs. For example, the driving experience, including noise, vibration, and harshness (NVH), is an important criterion for most customers, while battery lifetime is a major concern for manufacturers. Considering that such additional factors may also have a significant influence on the overall performance of HEVs, we have applied RVEA [5] to the optimization of a seven-objective HEV controller.

## A Benchmark Test Suite for Large-scale Multi- and Many-objective Optimization [7]

Interest in multi- and many-objective optimization has been rapidly increasing in the evolutionary computation community. However, most studies on multi- and many-objective optimization are limited to small-scale problems, despite the fact that many real-world multi- and many-objective optimization problems may involve a large number of decision variables.

[Figure: Structure of the LSMOP functions; correlations between decision variables and optimization objectives]

To promote research on large-scale multi- and many-objective optimization, we propose a set of generic test problems, called the LSMOP test suite, based on design principles widely used in the multi- and many-objective optimization literature. So that the test problems reflect challenges in real-world applications, we consider mixed separability between decision variables and non-uniform correlations between decision variables and objective functions. This is the first test suite for large-scale multi- and many-objective optimization in the literature.

## Decision Variable Clustering Based Large-scale Many-objective Optimization [8]

The current literature on evolutionary many-objective optimization has focused largely on scalability to the number of objectives, while little work has considered scalability to the number of decision variables. Nevertheless, many real-world problems can involve both many objectives and large-scale decision variables. To tackle such large-scale many-objective optimization problems, we have proposed a specially tailored evolutionary algorithm based on a decision variable clustering method.

[Figure: Decision variable clustering]

To begin with, the decision variable clustering method divides the decision variables into two types: convergence-related variables and diversity-related variables. Afterwards, to optimize the two types of decision variables, a convergence optimization strategy and a diversity optimization strategy are adopted. In addition, a fast non-dominated sorting approach is developed to further improve the computational efficiency of the proposed algorithm. To assess the performance of the proposed algorithm, empirical experiments have been conducted on a variety of large-scale many-objective optimization problems with up to 10 objectives and 5000 decision variables. Our experimental results demonstrate that the proposed algorithm has significant advantages over several state-of-the-art evolutionary algorithms in terms of the scalability to decision variables on many-objective optimization problems.
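
A much-simplified sketch of the variable clustering idea follows; the sampling scheme, the crude two-point "fitted direction", and the angle threshold are illustrative stand-ins, not the exact procedure of [8]:

```python
import math

def classify_variable(evaluate, x, i, lb, ub, samples=8,
                      angle_threshold=math.pi / 8):
    """Perturb decision variable i, take the direction traced out in the
    objective space, and compare its angle with the normal of the
    hyperplane f1 + ... + fM = const. A small angle suggests the variable
    mainly drives convergence; a large one, diversity. This is a toy
    approximation of the clustering method, for illustration only."""
    pts = []
    for k in range(samples):
        y = list(x)
        y[i] = lb + (ub - lb) * k / (samples - 1)
        pts.append(evaluate(y))
    M = len(pts[0])
    # Direction from first to last sample, a crude stand-in for a fitted line.
    d = [pts[-1][j] - pts[0][j] for j in range(M)]
    nd = math.sqrt(sum(c * c for c in d)) or 1.0
    normal = [1.0 / math.sqrt(M)] * M
    cos_a = abs(sum(a * b for a, b in zip(d, normal))) / nd
    angle = math.acos(max(-1.0, min(1.0, cos_a)))
    return "convergence" if angle < angle_threshold else "diversity"

# Toy bi-objective problem: x[1] shifts both objectives together
# (convergence), x[0] trades them off against each other (diversity).
toy_mop = lambda y: (y[0] + y[1], 1.0 - y[0] + y[1])
print(classify_variable(toy_mop, [0.5, 0.5], 0, 0.0, 1.0))  # diversity
print(classify_variable(toy_mop, [0.5, 0.5], 1, 0.0, 1.0))  # convergence
```

Once variables are split this way, the convergence-related group can be optimized aggressively toward the PF while the diversity-related group is varied to spread solutions along it.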

## Evolutionary Multi-modal Optimization

Multimodal optimization (MMO), which refers to single-objective optimization involving multiple optimal (or near-optimal) solutions, has attracted increasing interest recently. MMO is widely seen in real-world scenarios, where decisions can be made on the basis of multiple optimal solutions to a given optimization problem. For example, in truss-structure optimization, where the optimization objective is a quality criterion (e.g., weight or reliability) of the truss structure and the decision variables can be the density or length of the truss members, it is likely that different values of the decision variables lead to the same (or very close) fitness of the objective function. In such a scenario, the decision maker (DM) has to make decisions according to personal preferences.
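
A toy example of such a landscape: the benchmark-style function below has five equally good optima, and a simple grid scan (standing in for a proper niching method) recovers all of them.

```python
import math

# An illustrative multimodal objective (a classic benchmark shape):
# f(x) = sin(5*pi*x)^6 on [0, 1] has five equally good global maxima,
# at x = 0.1, 0.3, 0.5, 0.7, 0.9.
def f(x):
    return math.sin(5 * math.pi * x) ** 6

# Keep grid points that are near-optimal AND locally maximal; a real MMO
# algorithm would use niching/crowding instead of exhaustive enumeration.
n = 1000
xs = [i / n for i in range(n + 1)]
optima = [x for x in xs
          if f(x) > 0.999
          and all(f(x) >= f(min(max(x + s / n, 0.0), 1.0)) for s in (-1, 1))]
print(optima)  # close to [0.1, 0.3, 0.5, 0.7, 0.9]
```

A single-solution optimizer would return just one of these five points; the goal of MMO is to present all of them so that the DM can choose among equally fit alternatives.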

## References

1. Ran Cheng. Nature Inspired Optimization of Large Problems, University of Surrey, UK, May 2016.
2. Ran Cheng and Yaochu Jin. A Competitive Swarm Optimizer for Large Scale Optimization. IEEE Transactions on Cybernetics, 45(2): 191-204, 2015. (Top 1% ESI Highly Cited Article) [PDF] [Matlab Code] [C Code]
3. Ran Cheng and Yaochu Jin. A Social Learning Particle Swarm Optimization Algorithm for Scalable Optimization. Information Sciences, 291: 43-60, 2015. (Top 1% ESI Highly Cited Article) [PDF] [Matlab Code] [C Code]
4. Ran Cheng, Yaochu Jin, Kaname Narukawa and Bernhard Sendhoff. A Multiobjective Evolutionary Algorithm using Gaussian Process based Inverse Modeling. IEEE Transactions on Evolutionary Computation, 19(6): 838-856, 2015. [PDF] [Matlab Code]
5. Ran Cheng, Yaochu Jin, Markus Olhofer and Bernhard Sendhoff. A Reference Vector Guided Evolutionary Algorithm for Many-objective Optimization. IEEE Transactions on Evolutionary Computation, 20(5): 773-791, 2016. (Top 5 Popular Article of IEEE Transactions on Evolutionary Computation in October 2016) [PDF] [Matlab Code] [Java Code]
6. Ran Cheng, Tobias Rodemann, Michael Fischer, Markus Olhofer, and Yaochu Jin. Evolutionary Many-objective Optimization of Hybrid Electric Vehicle Control: From General Optimization to Preference Articulation. IEEE Transactions on Emerging Topics in Computational Intelligence, 2017. (in press) [PDF]
7. Ran Cheng, Yaochu Jin, Markus Olhofer and Bernhard Sendhoff. Test Problems for Large-scale Multiobjective and Many-objective Optimization. IEEE Transactions on Cybernetics, 47(12): 4108-4121, 2017. [PDF] [Matlab Code]
8. Xingyi Zhang, Ye Tian, Ran Cheng* (Corresponding Author), and Yaochu Jin. A Decision Variable Clustering-Based Evolutionary Algorithm for Large-scale Many-objective Optimization. IEEE Transactions on Evolutionary Computation, 2016. (in press) [PDF] [Matlab Code]