Research Projects

Multiobjective optimization problems (MOPs), which consider optimizing several conflicting criteria or objectives simultaneously, naturally exist in many real-life applications. An MOP is mathematically formulated as follows:

\[ \begin{array}{rl} \mathrm{minimize} & \mathbf{F}(\mathbf{x})=(f_{1}(\mathbf{x}),f_{2}(\mathbf{x}),\cdots,f_{m}(\mathbf{x}))^{T}\\ \mathrm{subject\ to} & g_j(\mathbf{x})\geq 0,\quad j=1,\cdots,J\\ & h_k(\mathbf{x})=0,\quad k=1,\cdots,K\\ & \mathbf{x}\in\Omega \end{array} \]

where \(\Omega=\prod_{i=1}^n[a_i,b_i] \subseteq\mathbb{R}^n\) is the decision (variable) space and \(\mathbf{x}=(x_1,\ldots,x_n)^T \in\Omega\) is a candidate solution. \(\mathbf{F}:\Omega\rightarrow\mathbb{R}^m\) consists of \(m\) conflicting objective functions, and \(\mathbb{R}^m\) is called the objective space. The \(g_j\) are the inequality constraints and the \(h_k\) are the equality constraints.
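As a concrete instance of this formulation, consider the well-known ZDT1 benchmark: two conflicting objectives, a box-constrained decision space \(\Omega=[0,1]^n\), and no \(g_j\)/\(h_k\) constraints. The following is a minimal Python sketch (the function name and usage are mine, for illustration only):

```python
import numpy as np

def zdt1(x):
    """ZDT1 benchmark: m = 2 conflicting objectives over Omega = [0, 1]^n,
    with no inequality or equality constraints."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])

# evaluate a random candidate solution from the decision space
x = np.random.rand(30)
print(zdt1(x))
```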

There are three basic requirements in evolutionary multiobjective optimization (EMO):

  • convergence: find solutions that approach the efficient front (EF) as closely as possible

  • diversity: spread these solutions over the EF as uniformly as possible

  • pertinence: focus on the partial region of the EF preferred by the decision maker (DM)


My current research focuses on exploring ways of balancing convergence and diversity in EMO. Generally speaking, my research projects have been along two lines:

  • designing efficient and effective mechanisms to select elite solutions to survive into the next generation;

  • developing efficient and effective methods to generate useful offspring solutions.

This page is still under construction; I will complete each section when I have time.

Selection Operator


The basic idea of the decomposition-based method, also known as the multiobjective evolutionary algorithm based on decomposition (MOEA/D), is to decompose the MOP in question into several subproblems (e.g., single-objective optimization problems or simplified MOPs) and to use a population-based method to optimize them simultaneously. From a unified perspective, the different subproblems are handled by collaborative agents. Selection, in MOEA/D, is the process by which agents choose solutions. For an agent, the selected solution should optimize the underlying subproblem as much as possible (i.e., convergence); at the same time, it should differ from the other agents' choices as much as possible (i.e., diversity). Unfortunately, most, if not all, MOEA/D implementations only explicitly address the convergence requirement. Based on the aforementioned unified perspective, we presented a first attempt to explicitly address both requirements simultaneously in the design of the selection operator.
[Read More]
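To make the agent-based view concrete, here is a minimal, purely illustrative Python sketch, not the selection operator from our paper: each agent owns a Tchebycheff subproblem defined by a weight vector, convergence comes from the scalarized value, and a crude form of diversity comes from letting each solution be claimed by at most one agent. All names and the greedy claiming rule are assumptions for illustration.

```python
import numpy as np

def tchebycheff(f, w, z_star):
    # Tchebycheff scalarizing function: smaller is better for a minimization MOP
    return np.max(w * np.abs(f - z_star))

def greedy_collaborative_selection(objs, weights):
    """Toy selection: each agent (weight vector) picks, in turn, the best remaining
    solution for its subproblem; a solution already claimed by another agent cannot
    be chosen again, which crudely keeps the agents' choices different."""
    objs = np.asarray(objs, dtype=float)
    z_star = objs.min(axis=0)            # ideal point estimated from the population
    available = set(range(len(objs)))
    selection = {}
    for i, w in enumerate(weights):
        best = min(available, key=lambda k: tchebycheff(objs[k], w, z_star))
        selection[i] = best
        available.remove(best)
    return selection

if __name__ == "__main__":
    # 6 candidate objective vectors, 3 weight vectors (m = 2)
    population = [[0.1, 0.9], [0.5, 0.5], [0.9, 0.1], [0.2, 0.8], [0.6, 0.6], [0.3, 0.3]]
    weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]) + 1e-6  # avoid zero weights
    print(greedy_collaborative_selection(population, weights))
```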


Generally speaking, Pareto- and decomposition-based techniques arguably have complementary effects in selection. We provided two different ways, sequential and simultaneous, of exploiting the advantages of both techniques. In the sequential hybrid strategy, the decomposition concept is generalized from single-objective optimization to local diversity preservation. The selection process is conducted in a hierarchical manner that depends on Pareto dominance, local density estimation, and scalarization functions, in that order. To further improve population diversity and provide additional selection pressure, a solution eliminated by Pareto dominance can be offered a second chance for survival if it is associated with an isolated subregion. For the simultaneous way, we developed a dual-population paradigm that handles convergence and diversity in two separate, co-evolving populations. In addition, a restricted mating selection mechanism coordinates the interaction between these two co-evolving populations.
[Read More]
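The flavour of the sequential hybrid can be sketched as a three-level comparison. The helper below is an illustrative assumption, not the exact procedure from the paper; the density and scalarization values would be supplied by the surrounding algorithm.

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every objective and
    strictly better in at least one."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return bool(np.all(a <= b) and np.any(a < b))

def hierarchical_better(f_a, f_b, density_a, density_b, scalar_a, scalar_b):
    """Illustrative hierarchical comparison: Pareto dominance first, then local
    density (a less crowded subregion wins), then the scalarizing-function value
    as the final tie breaker."""
    if dominates(f_a, f_b):
        return True
    if dominates(f_b, f_a):
        return False
    if density_a != density_b:
        return density_a < density_b
    return scalar_a < scalar_b

# example: non-dominated pair decided by density, then scalarization
print(hierarchical_better([0.2, 0.8], [0.8, 0.2], density_a=1, density_b=3,
                          scalar_a=0.4, scalar_b=0.5))
```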

Reproduction Operator


The dilemma of exploitation versus exploration has been well studied in game theory, where it is known as the multi-armed bandit problem. In this work, we developed an adaptive operator selection paradigm based on a bandit model. To track the dynamics of the search process, we devised a sliding window to record the performance of recent operator applications and introduced a decaying mechanism to increase the selection probability of the current best operator. Empirical results demonstrated that this adaptive operator selection paradigm balances exploitation and exploration of the search process better than random operator selection and a static variant.
[Read More]
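A minimal sketch of such a bandit-style operator selector is given below. It keeps recent rewards in a sliding window, discounts older applications with a decay factor, and picks the next operator with a UCB-like score; the parameter names and default values are assumptions, not the settings used in the paper.

```python
import math
import random
from collections import deque

class SlidingWindowBandit:
    """Illustrative bandit-style adaptive operator selection with a sliding window
    of recent (operator, reward) pairs and a decaying credit assignment."""

    def __init__(self, n_ops, window_size=50, c=1.0, decay=0.9, seed=None):
        self.n_ops = n_ops
        self.window = deque(maxlen=window_size)   # recent (operator index, reward) pairs
        self.c = c                                # exploration strength
        self.decay = decay                        # discount for older applications
        self.rng = random.Random(seed)

    def record(self, op, reward):
        self.window.append((op, reward))

    def select(self):
        tried = {op for op, _ in self.window}
        untried = [i for i in range(self.n_ops) if i not in tried]
        if untried:                               # make sure every operator gets a chance
            return self.rng.choice(untried)
        credit = [0.0] * self.n_ops
        count = [0] * self.n_ops
        for age, (op, reward) in enumerate(reversed(self.window)):
            credit[op] += (self.decay ** age) * reward   # newer rewards weigh more
            count[op] += 1
        total = len(self.window)
        scores = [credit[i] + self.c * math.sqrt(2.0 * math.log(total) / count[i])
                  for i in range(self.n_ops)]
        return scores.index(max(scores))

# usage: select an operator, apply it, then feed back the observed improvement
bandit = SlidingWindowBandit(n_ops=4, seed=0)
op = bandit.select()
bandit.record(op, reward=0.3)
```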


Taking advantage of the regularity property of continuous MOPs, we presented a first attempt to use manifold learning techniques to progressively learn a representation of the Pareto-optimal set at each generation. New offspring solutions are then interpolated from this approximated representation. Experimental results demonstrated that the proposed method is able to generate useful offspring when tackling problems with complicated search landscapes.
[Read More]
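The idea can be illustrated with a much simpler stand-in for manifold learning: fit a low-dimensional linear (PCA) model to the population in decision space, then interpolate new points on that model with a small perturbation. This is only a sketch of the regularity-based principle, not the method from the paper; under regularity, the Pareto set of a continuous m-objective MOP is an (m-1)-dimensional manifold, so manifold_dim = m - 1 is the typical choice.

```python
import numpy as np

def sample_offspring_from_manifold(pop_x, n_offspring, manifold_dim, noise=0.05, rng=None):
    """Toy regularity-based reproduction: approximate the population in decision
    space by a manifold_dim-dimensional PCA model, interpolate coordinates inside
    the projected range, lift them back, and add small Gaussian noise."""
    rng = np.random.default_rng(rng)
    X = np.asarray(pop_x, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)  # principal directions
    basis = vt[:manifold_dim]                   # (manifold_dim, n) row basis
    coords = (X - mean) @ basis.T               # population projected onto the model
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    new_coords = rng.uniform(lo, hi, size=(n_offspring, manifold_dim))
    return mean + new_coords @ basis + rng.normal(0.0, noise, size=(n_offspring, X.shape[1]))

# usage: generate 5 offspring from a population of 20 points in a 10-dimensional space
population = np.random.rand(20, 10)
print(sample_offspring_from_manifold(population, n_offspring=5, manifold_dim=1, rng=0))
```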


To enhance the exploration ability of an EMO algorithm, we developed an adaptive local search mechanism based on a jumping-gene paradigm and segment-based search. For the sake of computational efficiency, only the non-dominated solutions eliminated by the selection process are chosen for further exploitation. Empirical studies demonstrated that our method not only improves convergence but also enhances population diversity.
[Read More]
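For readers unfamiliar with the jumping-gene idea, the toy operation below shows one common variant, cut-and-paste of a randomly chosen chromosome segment. It is a generic illustration under my own assumptions, not the segment-based operator used in this work.

```python
import random

def jumping_gene_cut_and_paste(x, rng=None):
    """Toy cut-and-paste jumping-gene operation: a randomly chosen segment of the
    chromosome is cut out and re-inserted at a random position, which can relocate
    a promising building block elsewhere in the solution."""
    rng = rng or random.Random()
    x = list(x)
    i, j = sorted(rng.sample(range(len(x) + 1), 2))   # segment boundaries, i < j
    segment, rest = x[i:j], x[:i] + x[j:]
    k = rng.randrange(len(rest) + 1)                  # insertion point in the remainder
    return rest[:k] + segment + rest[k:]

# usage on a simple 10-gene chromosome
print(jumping_gene_cut_and_paste(list(range(10)), random.Random(0)))
```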