Evolutionary multi-objective optimization (EMO) has been flourishing in academia for two decades. However, industrial applications of EMO to real-world optimization problems remain infrequent, because EMO rests on the strong assumption that objective function evaluations are easily obtained. In practice, such analytic objective functions may not exist; instead, computationally expensive numerical simulations or costly physical experiments must be performed to evaluate candidate solutions. Problems driven by data collected from such simulations or experiments are formulated as data-driven optimization problems, which pose challenges to conventional EMO algorithms. Firstly, obtaining even the minimum amount of data that conventional EMO algorithms need to converge incurs a high computational or resource cost. Secondly, although surrogate models that approximate the objective functions can replace the real function evaluations, the search accuracy cannot be guaranteed because of the approximation errors of the surrogates. Thirdly, since only a small amount of online data can be sampled during the optimization process, the management of these online data significantly affects algorithm performance. Research on data-driven evolutionary optimization has not received sufficient attention, although techniques for solving such problems are in high demand. One main reason is the lack of benchmark problems that closely reflect real-world challenges, which leaves a large gap between academia and industry.
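The offline/online data distinction above can be made concrete with a generic surrogate-assisted loop: fit a cheap model to all evaluated data, search the model instead of the expensive function, and spend the fixed online budget only on the most promising candidates. The toy objective and nearest-neighbor surrogate below are illustrative stand-ins, not any competition algorithm:

```python
import random

def expensive_f(x):
    # stand-in for a costly simulation or physical experiment
    return (x - 0.3) ** 2

def nearest_neighbor_surrogate(data):
    """Cheapest possible surrogate: predict the value of the closest
    already-evaluated point."""
    return lambda x: min(data, key=lambda d: abs(d[0] - x))[1]

rng = random.Random(1)
# offline data: a handful of real evaluations collected before the search
data = [(x, expensive_f(x)) for x in (rng.random() for _ in range(5))]
for _ in range(10):  # fixed online budget acts as the stopping criterion
    model = nearest_neighbor_surrogate(data)
    candidates = [rng.random() for _ in range(200)]   # cheap surrogate search
    best = min(candidates, key=model)                 # most promising candidate
    data.append((best, expensive_f(best)))            # one real online evaluation
best_x, best_y = min(data, key=lambda d: d[1])
```

The real evaluator is called only 5 + 10 times; all other work happens on the surrogate, which is the defining trade-off of data-driven optimization.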

In this competition, we carefully select 7 benchmark multi-objective optimization problems from real-world applications, including car cab design, vehicle frontal-structure optimization, filter design, power system optimization, and neural network optimization. The objective functions of these problems cannot be calculated analytically, but they can be evaluated by calling an executable program that provides true black-box evaluations for both offline and online data sampling. A set of initial data is generated offline using Latin hypercube sampling, and a predefined fixed number of online data samples serves as the stopping criterion. This competition, organized by the Task Force on Intelligence Systems for Health in the Intelligent Systems Application Technical Committee and the Task Force on Data-Driven Evolutionary Optimization of Expensive Problems in the Evolutionary Computation Technical Committee, aims to promote research on data-driven evolutionary multi-objective optimization by suggesting a set of benchmark problems extracted from various real-world optimization applications. All benchmark functions are implemented in MATLAB. The MATLAB code has also been embedded in PlatEMO, a recently developed open-source MATLAB-based platform for evolutionary multi- and many-objective optimization, which currently includes more than 50 representative algorithms and over 100 benchmark functions, along with a variety of widely used performance indicators.
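Since the offline data are generated by Latin hypercube sampling, a short sketch may clarify the scheme: each decision variable's range is divided into as many equal strata as there are samples, and exactly one point falls in each stratum per dimension. The sample size and bounds below are placeholders, not the competition's settings:

```python
import random

def latin_hypercube(n_samples, lower, upper, rng=random.Random(0)):
    """Latin hypercube sampling: per dimension, split the range into
    n_samples equal strata, draw one point per stratum, then shuffle
    the strata independently across dimensions."""
    dim = len(lower)
    samples = [[0.0] * dim for _ in range(n_samples)]
    for d in range(dim):
        # one uniformly random point inside each of the n_samples strata
        points = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(points)
        for i in range(n_samples):
            samples[i][d] = lower[d] + points[i] * (upper[d] - lower[d])
    return samples

# Example: an 11-D design space as in DDMOP1 (unit bounds are placeholders)
X = latin_hypercube(55, [0.0] * 11, [1.0] * 11)
```

The stratification guarantees that every variable's range is covered evenly even with few samples, which is why the design is popular for building initial surrogate datasets.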

**Test Problems**:

**DDMOP1**: This problem is a vehicle performance optimization problem, termed car cab design, which has 11 decision variables and 9 objectives.

The decision variables include the dimensions of the car body and bounds on natural frequencies, e.g., thickness of the B-pillar inner, thickness of the floor side inner, thickness of the door beam, and barrier height. The nine objectives characterize the performance of the car cab, e.g., weight of the car, fuel economy, acceleration time, road noise at different speeds, and roominess of the car.

**DDMOP2**: This problem aims at structural optimization of the frontal structure of vehicles for crashworthiness, and involves 5 decision variables and 3 objectives. The decision variables are the thicknesses of five reinforced members around the frontal structure. The three objectives, all to be minimized, are the mass of the vehicle, the deceleration during a full-frontal crash (which is proportional to the biomechanical injuries caused to the occupants), and the toe board intrusion in an offset-frontal crash (which accounts for the structural integrity of the vehicle).

**DDMOP3**: This problem is the design of an LTLCL switching ripple suppressor with two resonant branches, which includes 6 decision variables and 3 objectives. The suppressor achieves zero impedance at two different frequencies. The decision variables are the design parameters of the electronic components, e.g., capacitors, inductors, and resistors. The objectives involve the total cost of the inductors (which is proportional to the consumption of copper and the economic cost) and the harmonic attenuations at the two resonant frequencies (which reflect the performance of the designed switching ripple suppressor).

**DDMOP4**: This problem is also the design of an LTLCL switching ripple suppressor, but with nine resonant branches, which includes 13 decision variables and 10 objectives. The suppressor achieves zero impedance at nine different frequencies. The decision variables are the design parameters of the electronic components, e.g., capacitors, inductors, and resistors. The objectives involve the total cost of the inductors and the harmonic attenuations at the nine resonant frequencies.

**DDMOP5**: This problem is a reactive power optimization problem with 14 buses, termed RPOPS, which involves 11 decision variables and 3 objectives. The decision variables describe the system conditions, e.g., active power of the generators, initial voltage values, and per-unit values of the parallel capacitor and susceptance. The objectives characterize the performance of the power system, e.g., active power loss, voltage deviation, reciprocal of the voltage stability margin, generation cost, and emission of the power system.

**DDMOP6**: This problem is a portfolio optimization problem with 10 decision variables and 2 objectives. The data consist of the closing prices of 10 assets over 100 minutes. Each decision variable indicates the investment proportion in an asset. The first objective denotes the overall return, and the second denotes the financial risk according to modern portfolio theory.
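Under modern portfolio theory, the two objectives can be expressed as the (negated) expected return and the variance of the portfolio's per-minute returns. The sketch below shows one plausible mean-variance formulation; the exact objective definitions used by DDMOP6's executable evaluator may differ:

```python
def portfolio_objectives(weights, prices):
    """Mean-variance objectives for a portfolio (illustrative formulation).
    weights: investment proportions per asset (assumed to sum to 1)
    prices:  prices[t][i] = closing price of asset i at time step t
    Returns (negated expected return, variance) -- both to be minimized."""
    T, n = len(prices), len(weights)
    # per-period simple returns for each asset
    rets = [[prices[t][i] / prices[t - 1][i] - 1.0 for i in range(n)]
            for t in range(1, T)]
    means = [sum(r[i] for r in rets) / len(rets) for i in range(n)]
    # sample covariance matrix of the asset returns
    cov = [[sum((r[i] - means[i]) * (r[j] - means[j]) for r in rets)
            / (len(rets) - 1) for j in range(n)] for i in range(n)]
    exp_return = sum(weights[i] * means[i] for i in range(n))
    variance = sum(weights[i] * cov[i][j] * weights[j]
                   for i in range(n) for j in range(n))
    return -exp_return, variance
```

Negating the return turns both objectives into minimization problems, matching the convention used by the other DDMOP benchmarks.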

**DDMOP7**: This problem is a neural network training problem with 17 decision variables and 2 objectives. The training data consist of 690 samples with 14 features and 2 classes. Each decision variable encodes a weight of a neural network of size 14*1*1. The first objective denotes the complexity of the network (i.e., the ratio of nonzero weights), and the second denotes the classification error rate of the neural network.
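To illustrate how 17 decision variables can parameterize a 14*1*1 network, here is a hedged sketch; the weight layout, sigmoid activations, and 0.5 decision threshold are assumptions for illustration, not necessarily DDMOP7's actual encoding:

```python
import math

def ddmop7_style_objectives(x, samples, labels, eps=1e-6):
    """Evaluate a 14-1-1 feed-forward network encoded by 17 weights
    (assumed layout: x[0:14] input-to-hidden weights, x[14] hidden bias,
    x[15] hidden-to-output weight, x[16] output bias).
    Objectives: complexity = ratio of nonzero weights,
                error rate on (samples, labels)."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    complexity = sum(abs(w) > eps for w in x) / len(x)
    errors = 0
    for features, label in zip(samples, labels):
        h = sigmoid(sum(w * f for w, f in zip(x[:14], features)) + x[14])
        out = sigmoid(x[15] * h + x[16])
        errors += (out > 0.5) != bool(label)
    return complexity, errors / len(samples)
```

Counting 14 input weights, a hidden bias, an output weight, and an output bias gives exactly the 17 decision variables, and driving weights to zero trades classification accuracy against the complexity objective.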

**Download links**: https://github.com/HandingWang/DDMOP

**Important Dates**:

For participants planning to submit a paper to the 2019 IEEE Congress on Evolutionary Computation:

Paper submission: **7th January, 2019**

Notification to authors: 7th March, 2019

Final submission: 31st March, 2019

**Note**: You are encouraged to submit your paper to the Special Session on Data-Driven Optimization of Computationally Expensive Problems.

For other participants (**only result entry but without a paper**):
**Results submission deadline: 30th April 2019**

**Note**: Please send your results directly to Dr Handing Wang (wanghanding.patch@gmail.com) or Dr Cheng He (chenghehust@gmail.com)

**Organizers**:

Handing Wang, School of Artificial Intelligence, Xidian University, China

Cheng He, Department of Computer Science and Engineering, Southern University of Science and Technology, China

Ye Tian, School of Computer Science and Technology, Anhui University, China

Yaochu Jin, Department of Computer Science, University of Surrey, UK

**Competition entries**:

**CSEA**: Classification-based surrogate-assisted evolutionary algorithm, Linqiang Pan, Cheng He, Ye Tian, Handing Wang, Xingyi Zhang, and Yaochu Jin, Huazhong University of Science and Technology, Anhui University, University of Surrey

**K-RVEA**: Kriging-assisted reference vector guided evolutionary algorithm, Tinkle Chugh, Yaochu Jin, Kaisa Miettinen, Jussi Hakanen, Karthik Sindhya, University of Jyvaskyla, University of Surrey

**AGPEA-V1**: Accurate search based Gaussian process model evolutionary algorithm (V1), Yitian Hong, Xidian University

**AGPEA-V2**: Accurate search based Gaussian process model evolutionary algorithm (V2), Yitian Hong, Xidian University

**MOEAEC**: A multiobjective evolutionary algorithm with an ensemble classifier for expensive multiobjective optimization, Tian Lan, Xinye Cai, Nanjing University of Aeronautics and Astronautics, Nanjing, China

**GP-MOEA-RVA-V1**: A Gaussian process assisted multiobjective evolutionary algorithm with reference vector association for expensive problems (V1), Zhenshou Song, Handing Wang, Chunquan Li, Nanchang University, Xidian University

**GP-MOEA-RVA-V2**: A Gaussian process assisted multiobjective evolutionary algorithm with reference vector association for expensive problems (V2), Zhenshou Song, Handing Wang, Chunquan Li, Nanchang University, Xidian University

**SANSGA-III**: A surrogate-assisted NSGA-III algorithm for computationally expensive multi/many-objective optimization problems, Fan Li, Liang Gao, Weiming Shen, Xiwen Cai, Huazhong University of Science and Technology

**HSMEA**: Hybrid surrogate-assisted many-objective evolutionary algorithm, Ahsanul Habib, Hemant Kumar Singh, Tinkle Chugh, Tapabrata Ray, Kaisa Miettinen, University of New South Wales (UNSW) Canberra, Australia; University of Exeter, UK; University of Jyvaskyla, Finland

**Results in HV**:

| Problem | HSMEA | MOEAEC | AGPEA-V1 | AGPEA-V2 | SANSGA-III | GP-MOEA-RVA-V1 | GP-MOEA-RVA-V2 | CSEA | K-RVEA |
|---|---|---|---|---|---|---|---|---|---|
| DDMOP1 | 2.63E+07 | 1.20E+07 | 1.35E+07 | 1.89E+07 | 1.78E+07 | 1.75E+07 | 1.75E+07 | 1.81E+07 | 1.34E+07 |
| DDMOP2 | 6.03E+03 | 2.24E+02 | 2.11E+02 | 2.21E+02 | 2.38E+02 | 2.30E+02 | 2.31E+02 | 2.27E+02 | 2.24E+02 |
| DDMOP3 | 3.41E+02 | 3.43E+02 | 3.37E+02 | 3.33E+02 | 3.43E+02 | 3.37E+02 | 3.38E+02 | 3.34E+02 | 3.20E+02 |
| DDMOP4 | 3.72E+21 | 3.76E+21 | 3.58E+21 | 3.55E+21 | 3.69E+21 | 3.76E+21 | 3.78E+21 | 3.59E+21 | 3.40E+21 |
| DDMOP5 | 9.47E-03 | 9.18E-03 | 8.46E-03 | 8.41E-03 | 9.53E-03 | 8.42E-03 | 8.30E-03 | 8.90E-03 | 9.13E-03 |
| DDMOP6 | 1.57E-10 | 1.30E-10 | 1.40E-10 | 1.43E-10 | 1.56E-10 | 1.46E-10 | 1.49E-10 | 1.30E-10 | 1.43E-10 |
| DDMOP7 | 2.18E-01 | 3.25E-02 | 2.09E-01 | 1.88E-01 | 3.64E-02 | 2.02E-01 | 1.75E-01 | 2.16E-01 | 1.87E-01 |
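The HV (hypervolume) indicator used above measures the objective-space volume dominated by an algorithm's solutions relative to a reference point, with larger values being better. For the bi-objective problems DDMOP6 and DDMOP7 it reduces to a dominated area that a simple sweep can compute; the reference point below is illustrative, not the one used in the competition:

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of bi-objective minimization solutions,
    bounded above by the reference point `ref`."""
    # keep only points that strictly dominate the reference point
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:  # ascending f1 implies descending f2 on the front
        if f2 < prev_f2:  # points failing this test are dominated
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For the many-objective problems (e.g., the 9- and 10-objective DDMOP1 and DDMOP4), exact HV computation is exponentially expensive in the number of objectives, so Monte Carlo approximations are typically used instead.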

**Winning algorithms**:

Winner: HSMEA

First runner-up: SANSGA-III

Second runner-up: GP-MOEA-RVA-V1