An Uncertainty Measure for Prediction
of Non-Gaussian Process Surrogates

Caie Hu (caie_hu@cug.edu.cn)
School of Mechanical Engineering and Electronic Information, China University
of Geosciences, Wuhan 430074, China

Sanyou Zeng* (sanyouzeng@gmail.com)
School of Mechanical Engineering and Electronic Information, China University
of Geosciences, Wuhan 430074, China

Changhe Li* (changhe.lw@gmail.com)
School of Automation, China University of Geosciences, Wuhan 430074, China
Hubei Key Laboratory of Advanced Control and Intelligent Automation
for Complex Systems, Wuhan 430074, China
Engineering Research Center of Intelligent Technology for Geo-Exploration,
Ministry of Education, Wuhan 430074, China

https://doi.org/10.1162/evco_a_00316

Abstract

Model management is an essential component in data-driven surrogate-assisted evolutionary optimization. In model management, solutions with a large degree of uncertainty in approximation play an important role. They can strengthen the exploration ability of algorithms and improve the accuracy of surrogates. However, there is no theoretical method to measure the uncertainty of prediction of Non-Gaussian process surrogates. To address this issue, this article proposes a method to measure that uncertainty. In this method, a stationary random field with a known zero mean is used to measure the uncertainty of prediction of Non-Gaussian process surrogates. Experimental analyses show that this method is able to measure the uncertainty of prediction of Non-Gaussian process surrogates. The method's effectiveness is demonstrated on a set of benchmark problems in both the single surrogate and the ensemble surrogates cases.

Keywords

Evolutionary computation, data-driven evolutionary optimization, surrogate, model
management, Non-Gaussian process.

1 Introduction

Data-driven optimization problems usually involve objective and constraint functions that are not analytically available, and the evaluation of these functions is time-consuming and complex. Only small amounts of data are available from physical experiments, numerical simulations, or daily life, and the evaluation of these functions involves a number of computationally expensive numerical simulations or costly physical experiments (Preen and Bull, 2016; Wang et al., 2016; Jin et al., 2018).

*Corresponding author.

Manuscript received: 19 April 2021; revised: 22 April 2021, 10 January 2022, 23 August 2022; accepted: 16
September 2022.
© 2022 Massachusetts Institute of Technology

Evolutionary Computation 31(1): 53–71

C. Hu, S. Zeng, and C. Li

Evolutionary algorithms (EAs) are population-based search methods that mimic natural biological evolution and species' social behavior. They are promising in solving non-convex, constrained, multiobjective, or dynamic problems (Michalewicz and Schoenauer, 1996; Hart et al., 1998; Li et al., 2014; Zhang, Mei et al., 2021). However, most existing research on EAs assumes that the analytic objective and constraint functions are available and that evaluating these functions is cheap and simple. Therefore, EAs cannot be directly used to solve data-driven optimization problems. Surrogate-assisted evolutionary algorithms (SAEAs) are considered to address this limitation of EAs (Jin et al., 2000; Tong et al., 2019; Zhang, Li et al., 2021; Wang et al., 2022). In SAEAs, many machine learning models can be used as surrogates to approximate the exact functions, including polynomial regression (PR), Gaussian process (GP), artificial neural network (ANN), radial basis function network (RBFN), support vector machine (SVM), and ensembles of these surrogates. A limited number of exact function evaluations are carried out, and a small amount of data is used to train these surrogates (Braun et al., 2009; Jin et al., 2000; Chugh et al., 2019).

Among the surrogates mentioned above, GP is the most commonly used (Emmerich et al., 2006; Coelho and Bouillard, 2011; Chugh et al., 2016; Zhan and Xing, 2021). GP provides both a prediction and uncertainty information, which is important in SAEAs. The existing infill sampling criteria can then be used to guide the search of EAs and the update of surrogates, such as the lower confidence bound (LCB) (Torczon and Trosset, 1998), the expected improvement (EI) (Jones et al., 1998), and the probability of improvement (PoI) (Ulmer et al., 2003). In contrast, although many Non-Gaussian process (Non-GP) surrogates can also provide a good prediction, they cannot provide the uncertainty of that prediction. In this case, these Non-GP surrogates have significant limitations: (1) because there is no uncertainty information about the prediction of surrogates, it is hard to improve the exploration of EAs and the accuracy of surrogates; (2) the existing infill sampling criteria cannot be used to guide the search of EAs.

It should be emphasized that the uncertainty information of prediction of surrogates plays an essential role in model management in SAEAs, because (1) solutions with a large degree of uncertainty indicate that the fitness landscape around them has not been well explored, and therefore evaluating these solutions is likely to find a better solution (Branke and Schmidt, 2005); and (2) evaluating these solutions can most effectively improve the accuracy of surrogates (Jin, 2011).

Several methods have been used to measure the uncertainty of prediction of Non-GP surrogates. For example, Bayesian neural networks can measure the uncertainty of prediction of neural networks (Gal and Ghahramani, 2015). Cross-validation can also be used to measure the uncertainty of prediction of surrogates (Hutter et al., 2019). However, these two methods have a significant limitation: the accuracy of the uncertainty estimate highly depends on the size of the training data, and there is not much training data in the data-driven optimization process. Besides, the prior distribution also needs to be known for Bayesian neural networks. Because of this limitation, the two methods are not investigated in this article.

In addition to the above methods, there are three typical methods: (1) the distance from the solutions to the existing training data has been used as an uncertainty measure in Branke and Schmidt (2005). Since ensemble surrogates have been shown to provide uncertainty information, two methods have been proposed to measure the uncertainty of prediction of ensemble surrogates: (2) Wang et al. (2017) defined the uncertainty measurement as the maximum difference between the outputs
of ensemble members. (3) The variance of the predictions output by the base surrogates of the ensemble is used to estimate the uncertainty of prediction of ensemble surrogates (Guo et al., 2018).

Among the three methods above, the first is a qualitative uncertainty measurement method. In theory, it is not able to accurately measure the uncertainty of prediction of surrogates. Instead, it indicates only how crowded the neighborhood of a solution is. The second method measures the disagreement among the outputs of the ensemble members for the prediction of surrogates. It was proposed based on Query-by-Committee (QBC) in active learning, which shows that querying with the maximum-disagreement strategy can efficiently enhance the accuracy of surrogates (Wang et al., 2017). In essence, this method describes the difference of predictions among ensemble members at a solution. In the third method, the uncertainty of prediction of surrogates is defined by the variance of the predictions output by the base surrogates of the ensemble. It indicates the average squared deviation of the base surrogates about the output of the ensemble. From a probability and statistics viewpoint, these methods for measuring the uncertainty of prediction of Non-GP surrogates are not sound: they cannot address the important issue of measuring that uncertainty in a theoretically grounded way. Therefore, it can be confirmed that there is no theoretically sound method to measure the uncertainty of prediction of Non-GP surrogates.

To address this issue, this article proposes an uncertainty measure for the prediction (UMP) of Non-GP surrogates. This method can be written in the form of a random field model. In detail, it consists of two components: a regression function (namely, the Non-GP surrogate) and a residual variation (also known as the uncertainty). In this method, the two components are uncorrelated. The first term, the Non-GP surrogate as regression function, depends only on the decision variables, and the second term represents the uncertainty of prediction of the Non-GP surrogate based on a stationary random field. Thus, based on the random field model, the uncertainty of prediction of a Non-GP surrogate can be measured. Then, the existing infill sampling criteria can be used to guide the search of algorithms and the update of surrogates.

In this article, an uncertainty measure for the prediction of Non-GP surrogates is proposed to overcome the drawbacks of existing uncertainty methods. The main contributions of this article can be summarized as follows:

(1) An uncertainty measure for the prediction of Non-GP surrogates is proposed, which overcomes the drawbacks of existing uncertainty methods;

(2) The effectiveness of the proposed method is investigated on a set of benchmark problems and analysed on the Rastrigin function in both the single surrogate and ensemble surrogates cases. The experimental results demonstrate that the proposed method is promising in solving data-driven optimization problems.

The rest of this article is structured as follows. Section 2 presents a brief review of the surrogates, ensemble surrogates, and infill sampling criterion used in this article. Section 3 presents the proposed uncertainty measure for the prediction of Non-GP surrogates. Section 4 demonstrates and discusses the experimental results. Finally, Section 5 concludes the article with a summary and a look into future work.

2 Related Work

Surrogates and infill sampling criteria are essential components in online surrogate-
assisted evolutionary algorithms. This section presents a brief review of surrogates and
the infill sampling criterion involved in this article.


2.1 Polynomial Regression

Polynomial regression adopts the statistical tools of regression and analysis of variance to obtain the minimum-variance regression. It is widely used for approximating exact objective and constraint functions. The polynomial regression at any untested x is defined as follows:

f̂(x) = β0 + Σ_{i=1}^{d} βi xi + Σ_{i,j=1; i≤j}^{d} βi,j xi xj + Σ_{i,j,k=1; i≤j≤k}^{d} βi,j,k xi xj xk + · · ·    (1)

where β0, βi, βi,j, and βi,j,k are the coefficients to be estimated and d is the dimension of the problem; the least square method (LSM) is usually used to estimate these coefficients in the surrogate.
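As an illustration (not the authors' implementation; function names are our own), the second-order truncation of Eq. (1) can be fit by the LSM with a linear least-squares solve over a quadratic design matrix:

```python
import numpy as np

def quad_design_matrix(X):
    """Design matrix for Eq. (1) truncated at second order: [1, x_i, x_i*x_j (i <= j)]."""
    N, d = X.shape
    cols = [np.ones(N)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def fit_quad(X, y):
    """Estimate the coefficients beta by the least square method (LSM)."""
    beta, *_ = np.linalg.lstsq(quad_design_matrix(X), y, rcond=None)
    return beta

def predict_quad(X, beta):
    """Evaluate the fitted quadratic polynomial surrogate at the rows of X."""
    return quad_design_matrix(X) @ beta
```

On data generated by a quadratic function, the fit recovers the surface exactly, since the true function lies in the span of the design matrix.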

2.2 Radial Basis Function Network

Like other neural networks, an RBFN has an input layer, a hidden layer, and an output layer. It uses radial basis functions as its activation functions. In an RBFN, the input layer is directly connected to the hidden one, and the output of the RBFN at an untested x has the following expression:

f̂(x) = Σ_{i=1}^{M} ωi ψ(‖x − ci‖_p),    (2)

where ψ is the activation function; ci can be any point vector (e.g., an origin or a center); M is the number of nodes of the hidden layer; ω are the unknown weights to be estimated, which can be determined by the LSM or by backpropagation based on gradient descent; and p denotes the norm.

2.3 Support Vector Machine

SVM is one of the popular surrogates based on statistical learning theory and is often used as a surrogate by constructing a hyperplane in a high-dimensional space. The SVM at an untested x is expressed as

f̂(x) = ω^T φ(x) + b,    (3)

where φ(x) is the feature vector; the coefficient vector ω and the coefficient b need to be estimated. The unknown parameters ω and b can be obtained by solving a constrained optimization problem (Cristianini and Shawe-Taylor, 2000) based on the observed values yi at xi for i = 1, · · · , N, which is shown as

min  (1/2)‖ω‖² + L Σ_{i=1}^{N} (ξi + ξ'i)
s.t. yi − ω^T φ(xi) − b ≤ ε + ξi,
     ω^T φ(xi) + b − yi ≤ ε + ξ'i,
     ξi, ξ'i ≥ 0,    (4)

where L = 1.0 and ε = 0.1 are prespecified values in this article, and ξi and ξ'i are slack variables representing the upper and lower constraints.

2.4 Ensemble Surrogates

Ensemble surrogates have been shown to outperform most single surrogates. They are able to generate more reliable predictions of the fitness landscape than single surrogates (Liu et al., 2000; Queipo et al., 2005) when little is known about
the problem to be optimized at hand. The prediction of the ensemble surrogates f̂_ens(x) is formulated as

f̂_ens(x) = Σ_{i=1}^{K} wi f̂i(x),    Σ_{i=1}^{K} wi = 1,    (5)

where f̂i(x) represents the output of the ith member of the ensemble; K is the number of members in the ensemble (K = 3 in this article); and wi is the weight of the ith member, defined in this article by

wi = 0.5 − ei / (2 Σ_{j=1}^{K} ej),    (6)

where ei and ej are the root mean square errors (RMSE) of the ith and jth members of the ensemble, respectively.
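A minimal sketch of Eqs. (5) and (6) follows (our own function names). Note that the weights of Eq. (6) sum to (K − 1)/2, which equals exactly 1 for the K = 3 members used in this article, satisfying the constraint in Eq. (5):

```python
import numpy as np

def ensemble_weights(rmse):
    """Eq. (6): w_i = 0.5 - e_i / (2 * sum_j e_j), where e_i is the RMSE of member i."""
    e = np.asarray(rmse, dtype=float)
    return 0.5 - e / (2.0 * e.sum())

def ensemble_predict(member_preds, weights):
    """Eq. (5): weighted sum of the member predictions at a point x."""
    return float(np.dot(weights, member_preds))
```

A member with a smaller RMSE receives a larger weight, so the more accurate surrogates dominate the ensemble prediction.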

Ensemble surrogates have also been shown to provide uncertainty information about their predictions, and two methods have been proposed to measure this information. Wang et al. (2017) defined the uncertainty measurement as the maximum difference between the outputs of the ensemble members, as shown in Eq. (7):

U(x) = max_{i,j} ( f̂i(x) − f̂j(x) ),    (7)

where the uncertainty U(x) at x is the maximum difference between the outputs of two ensemble members f̂i(x) and f̂j(x).

Guo et al. (2018) used the variance of the predictions output by the base members of the ensemble to estimate the uncertainty of prediction of ensemble surrogates, as shown in Eq. (8):

U(x) = (1/(K − 1)) Σ_{i=1}^{K} ( f̂i(x) − f̂_ens(x) )².    (8)
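Both ensemble uncertainty measures can be sketched in a few lines (our own helper names, under the assumption that the member outputs at a point x are already available):

```python
import numpy as np

def u_max_diff(member_preds):
    """Eq. (7): maximum pairwise difference between ensemble members' outputs at x."""
    p = np.asarray(member_preds, dtype=float)
    return float(p.max() - p.min())

def u_variance(member_preds, f_ens):
    """Eq. (8): average squared deviation of the base surrogates about the ensemble output."""
    p = np.asarray(member_preds, dtype=float)
    return float(np.sum((p - f_ens) ** 2) / (p.size - 1))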

2.5 Lower Confidence Bound

The LCB was suggested (Lewis et al., 2000; Emmerich et al., 2002) for selecting potential candidate solutions, especially in solving multimodal optimization problems. The LCB can prevent premature convergence and enhance the search toward less explored regions of the search space. The expression of the LCB is

f_LCB(x) = f̂(x) − ω ŝ(x),    (9)

where f̂(x) and ŝ(x) are the prediction mean and variance (uncertainty degree) from the surrogates, respectively; the parameter ω scales the impact of the variance; a reasonable choice is ω = 2, which leads to a high confidence probability (around 97%) (Emmerich et al., 2002).

3 Uncertainty Measure for Prediction of Non-GP Surrogates

We aim to address the issue that there is no theoretical method to measure the uncertainty of prediction of Non-GP surrogates. Thus, an uncertainty measure for the prediction of Non-GP surrogates is proposed in this article. This method can be written in the form of a random field model. In detail, it consists of two components: a regression function (namely, the Non-GP surrogate) and a residual variation (the uncertainty of prediction of the Non-GP surrogate). In this method, the two components are uncorrelated. The first term, the Non-GP surrogate as regression function, depends only on the decision variables, and the second term represents the uncertainty of prediction of the Non-GP surrogate based on a stationary random field.


3.1 Formulation of the Uncertainty Measure for Prediction

The UMP is formulated as

f(x) = m(x) + ε(x),    (10)

where m(x), as a regression function, can be any Non-GP surrogate or ensemble of Non-GP surrogates, and ε(x) is a zero-mean stationary random field with distribution N(0, σ²).

In this article, the UMP makes the following assumptions in building a cheap surrogate for an expensive function y = f(x), x ∈ R^d: f(x) ∼ N(m(x), σ²) is a random variable, and ε(x) ∼ N(0, σ²). For any x, x' ∈ R^d, the correlation between ε(x) and ε(x') depends on the distance between x and x'. The correlation function c(x, x') used in this article is shown in Eq. (11):

c(x, x') = exp( − Σ_{i=1}^{d} θi |xi − x'i|² ),    (11)

where d is the dimension of the problem, and θ = [θ1, · · · , θd]^T measures the importance or activity of each variable xi.
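The correlation of Eq. (11) and the matrix C it induces over the training set can be sketched as follows (our own function names; the negative exponent is the conventional form of this stationary correlation):

```python
import numpy as np

def correlation(x, xp, theta):
    """Eq. (11): c(x, x') = exp(-sum_i theta_i * |x_i - x'_i|^2)."""
    diff = np.asarray(x, dtype=float) - np.asarray(xp, dtype=float)
    return float(np.exp(-np.sum(np.asarray(theta, dtype=float) * diff ** 2)))

def correlation_matrix(X, theta):
    """N x N correlation matrix C among the training inputs X."""
    X = np.asarray(X, dtype=float)
    N = len(X)
    return np.array([[correlation(X[i], X[j], theta) for j in range(N)]
                     for i in range(N)])
```

The matrix has unit diagonal, is symmetric, and its entries decay with the θ-weighted squared distance between points.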

3.1.1 Hyperparameter Estimation

In the UMP, the hyperparameters σ² and θ can be determined by maximizing the log-likelihood function based on the observed values yi at xi (i = 1, · · · , N), which is shown as

−(1/2)[ N log(2πσ²) + log(det(C)) + (y − m)^T C⁻¹ (y − m)/σ² ],    (12)

where m = (m(xi)), i = 1, · · · , N, is a known N-dimensional column vector of Non-GP surrogate outputs at the training data; C is a known N × N correlation matrix among the training data; and y is the N-dimensional column vector of observed values at the training data.

The estimate of σ² can be obtained by setting the partial derivative of Eq. (12) with respect to σ² to zero:

σ̂² = (y − m)^T C⁻¹ (y − m) / N.    (13)

Substituting Eq. (13) into Eq. (12), the maximum of the log-likelihood over σ̂² is

−(1/2)[ N log(2πσ̂²) + log(det(C)) + N ];    (14)

since Eq. (14) depends only on the parameters within C, maximizing the log-likelihood amounts to maximizing

−N log(2πσ̂²) − log(det(C)).    (15)

3.1.2 Prediction Distribution

When all unknown hyperparameters have been determined, the prediction distribution at any untested point x can be obtained by using the conditional distribution. The uncertainty (conditional variance) of prediction of Non-GP surrogates is

ŝ(x) = σ̂² [ 1 − r^T C⁻¹ r ],    (16)

where r is the known N × 1 vector of correlations between the untested point x and the training data.
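Under the definitions above, Eqs. (13) and (16) can be sketched end to end for any Non-GP surrogate whose outputs m at the training points are given (our own function names; the small jitter added to C is our own numerical safeguard, not part of the formulation):

```python
import numpy as np

def ump_uncertainty(X, y, m, theta, x_new, jitter=1e-10):
    """Return (sigma2_hat, s_hat(x_new)) from Eq. (13) and Eq. (16).

    X: (N, d) training inputs; y: (N,) observed values;
    m: (N,) Non-GP surrogate outputs at X; theta: (d,) correlation hyperparameters.
    """
    X = np.asarray(X, dtype=float); y = np.asarray(y, dtype=float)
    m = np.asarray(m, dtype=float); theta = np.asarray(theta, dtype=float)
    x_new = np.asarray(x_new, dtype=float)
    N = len(X)

    def corr(a, b):
        # Eq. (11) with the conventional negative exponent
        return np.exp(-np.sum(theta * (a - b) ** 2))

    C = np.array([[corr(X[i], X[j]) for j in range(N)] for i in range(N)])
    C_inv = np.linalg.inv(C + jitter * np.eye(N))   # jitter: numerical safeguard only
    resid = y - m
    sigma2 = float(resid @ C_inv @ resid / N)       # Eq. (13)
    r = np.array([corr(x_new, X[i]) for i in range(N)])
    s_hat = float(sigma2 * (1.0 - r @ C_inv @ r))   # Eq. (16)
    return sigma2, s_hat
```

At a training point the conditional variance collapses to (numerically) zero, while between training points it is strictly positive, which is the behavior the infill criteria of Section 2 rely on.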

3.2 Instantiation of the UMP Framework

In the UMP, any Non-GP surrogate can serve as the first term. In this article, the first term is instantiated with RBFN, QP, and SVM as the surrogate, respectively.


3.2.1 UMP with RBFN
The form of the RBFN is described in Eq. (2). Here, the cubic kernel function is used as its activation function:

ψ(‖x − c‖_p) = ‖x − c‖_p³,    (17)

where c is a center point vector, and p = 2 in this article.

In this article, 2d + 1 cubic kernel functions are considered, based on the suggestion in Sprecher (1993). Based on the self-organizing method, the 2d + 1 center point vectors are obtained by the k-means algorithm (MacQueen, 1967).
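The surrogate of this subsection can be sketched as follows (our own function names; a simplified Lloyd-style batch k-means is used as a stand-in for MacQueen's original online variant):

```python
import numpy as np

def kmeans_centers(X, k, iters=50, seed=0):
    """Place k RBF centers by a simple Lloyd-style k-means (stand-in for MacQueen, 1967)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                C[j] = pts.mean(axis=0)
    return C

def cubic_design(X, centers):
    """Hidden-layer outputs with the cubic kernel of Eq. (17), p = 2."""
    X = np.asarray(X, dtype=float)
    return np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) ** 3

def fit_rbfn(X, y, centers):
    """Weights of Eq. (2) estimated by the least square method."""
    w, *_ = np.linalg.lstsq(cubic_design(X, centers), y, rcond=None)
    return w
```

With d = 2, the 2d + 1 = 5 centers are placed by k-means and the weights follow from one least-squares solve.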

3.2.2 UMP with Quadratic Polynomial
The quadratic polynomial (QP, second-order polynomial) is one of the most widely used polynomial regression models. Due to its simplicity and flexibility, QP is often used as a surrogate and has a wide range of applications in various fields of science and engineering. It can be expressed as follows:

f̂(x) = β0 + Σ_{i=1}^{d} βi xi + Σ_{i,j=1; i≤j}^{d} βi,j xi xj,    (18)

where β0, βi, and βi,j are the coefficients to be estimated, and x = [x1, . . . , xd]^T; QP is implemented using the Python toolbox scikit-learn (Pedregosa et al., 2011) in this article.

3.2.3 UMP with SVM
SVM is one of the regression techniques introduced in Section 2. In this article, the radial basis kernel function is adopted in the SVM:

κ(x, x') = ⟨φ(x), φ(x')⟩ = exp(−γ ‖x − x'‖²),    (19)

where the SVM is implemented using the Python toolbox scikit-learn (Pedregosa et al., 2011), and the parameter γ is set to 'scale' in this article.
3.3 Workflow of the UMP

The pseudocode for the workflow of the UMP is presented in Algorithm 1. Initially, 11d − 1 samples in the search space are generated using Latin hypercube sampling (LHS) (Stein, 1987) and evaluated by the exact functions. These samples are then archived in an initial database. The τ latest samples in the database are selected as training data to train the Non-GP surrogate. The Non-GP surrogate replaces the exact functions in evolving a population of NP individuals for T generations with a DE. Then a potential candidate solution is selected from the population by using the LCB and evaluated by the exact functions. After that, the solution is added to the database. Finally, when the computational budget is exhausted, the best solution in the database is chosen as the output.

4 Results and Discussion

To investigate the performance of the proposed UMP, a set of experiments is carried out in both the single and the ensemble surrogates cases by Algorithm 1. The Non-GP surrogates RBFN, QP, and SVM are considered in this article.

For the single surrogate, two experiments are carried out. First, the experiment compares Non-GP surrogates with and without the UMP, named UMP/RBFN, UMP/QP, UMP/SVM, RBFN, QP, and SVM, respectively. Second, the UMP is compared with the existing uncertainty method of Branke and Schmidt (2005), which uses the distance from the solutions to the existing training data (DUM). The three algorithms with the UMP are named UMP/RBFN, UMP/QP, and UMP/SVM, and the compared algorithms are named DUM/RBFN, DUM/QP, and DUM/SVM.

Regarding ensemble surrogates, the proposed UMP is compared with the method Uens, which is the maximum difference between the outputs of the ensemble members (Wang et al., 2017), and with VUM, which is the variance of the predictions output by the base surrogates of the ensemble (Guo et al., 2018). In this article, the ensemble consists of three surrogates: RBFN, QP, and SVM. The algorithms with the proposed method and the two compared methods are named UMP/ensemble, Uens/ensemble, and VUM/ensemble, respectively.

4.1 Parameter Settings

There are several parameters in the experiments. Their settings are given below.

(1) A computational budget of F Es = 100 exact function evaluations was used in this article, based on the assumption that the optimization algorithm is only allowed to evaluate a small number of candidate solutions during optimization. The number of runs was 25.

(2) DE parameters: DE/rand/1/bin was employed in this article. The number of evolution generations T = 100, population size NP = 20, scaling factor F = 0.5, and crossover rate CR = 0.9.

(3) The 11d − 1 initial samples were randomly generated by LHS.

(4) The range of values for the parameters θ was [1.0e−6, 20].

(5) Training data τ = 50 for dimension d = 2 and τ = 11d − 1 for d = 5, 10 were considered. The τ training data in the database were selected considering both the quality and the computational cost of the Non-GP surrogate.

4.2 Test Problems

The effectiveness of the proposed method is verified on the CEC 2014 benchmark problems (Liu et al., 2014) with 2, 5, and 10 dimensions. The benchmark problems are listed in Table 1.


Table 1: Test problems.

Problem | Objective function name        | f* | Property
F1      | Shifted Sphere                 | 0  | Unimodal
F2      | Shifted Ellipsoid              | 0  | Unimodal
F3      | Shifted and Rotated Ellipsoid  | 0  | Unimodal
F4      | Shifted Step                   | 0  | Unimodal, Discontinuous
F5      | Shifted Ackley                 | 0  | Multi-modal
F6      | Shifted Griewank               | 0  | Multi-modal
F7      | Shifted and Rotated Rosenbrock | 0  | Multi-modal with very narrow valley
F8      | Shifted and Rotated Rastrigin  | 0  | Very complicated multi-modal

4.3 Comparison on Single Non-GP Surrogate

4.3.1 Effect of the UMP
To investigate the effectiveness of the proposed UMP, a set of comparative experiments is carried out for the algorithms with and without the UMP. Table 2 presents all algorithms' average best fitness values on the test problems with 2, 5, and 10 dimensions. Figures 1, 2, and 3 present the convergence curves of the different algorithms on the F1, F5, and F8 test problems with different dimensions, respectively.

From Table 2 and Figures 1–3, the results of UMP/RBFN, UMP/QP, and UMP/SVM are significantly better than those of the other algorithms on most test problems. The performance of the three algorithms (UMP/RBFN, UMP/QP, and UMP/SVM) is consistently better than that of the others on both unimodal and multimodal problems, especially on the 5- and 10-dimensional test problems. These results are mainly attributed to the fact that the three algorithms have uncertainty information that can be used to guide the search of the algorithms and the update of the surrogates. The test problems become more complex as the number of dimensions increases, and there are many local optima for the multimodal problems. The uncertainty information is able to strengthen the exploration ability of the algorithms and improve the accuracy of the surrogates.

4.3.2 Comparison with Peer Algorithms
To further investigate the effectiveness of the proposed UMP, it is compared with the uncertainty method DUM, which is shown in Eq. (20). Table 3 presents the average best fitness values obtained by the proposed algorithms UMP/RBFN, UMP/QP, and UMP/SVM and by the compared algorithms DUM/RBFN, DUM/QP, and DUM/SVM. Figures 4, 5, and 6 present the convergence curves of the different algorithms on F1, F5, and F8 with different dimensions, respectively.

U(x) = 1 / Σ_{i=1}^{L} (1 / d_{x x'i}),    (20)

where U(x) represents the uncertainty of prediction at a solution x, d_{x x'i} is the Euclidean distance from solution x to solution x'i in the training data τ, and L is the number of solutions in the neighborhood used for the estimation; L is equal to the number of training data τ in this article.
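The distance-based measure of Eq. (20) can be sketched in a few lines (our own function name). Note that it is undefined when x coincides with a training point, since one distance becomes zero:

```python
import numpy as np

def dum_uncertainty(x, X_train):
    """Eq. (20): U(x) = 1 / sum_i (1 / d_i), with d_i the Euclidean distance
    from x to training point i. Undefined at an exact training point."""
    d = np.linalg.norm(np.asarray(X_train, dtype=float) - np.asarray(x, dtype=float), axis=1)
    return float(1.0 / np.sum(1.0 / d))
```

The value shrinks as x approaches the training data and grows in empty regions, so it indicates only the crowdedness of a neighborhood, as discussed in the Introduction.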

Table 3 and Figures 4–6 show that UMP/RBFN, UMP/QP, and UMP/SVM achieve significantly better performance than the other algorithms on most test problems. There is a significant difference on the 10-dimensional test problems, especially for F3,


Table 2: Comparing the average fitness values (shown as Avg±Std) of RBFN, QP, and SVM and of UMP/RBFN, UMP/QP, and UMP/SVM on F1–F8 with d = 2, 5, and 10, together with the average ranks; statistically significant results are evaluated using a Friedman test. [Table entries not recoverable from the extracted text.]

Chiffre 1: Comparison of with and without UMP on convergence curves for F 1, F 5, et
F 8 on 2d, respectivement.

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

Chiffre 2: Comparison of with and without UMP on convergence curves for F 1, F 5, et
F 8 on 5d, respectivement.


Chiffre 3: Comparison of with and without UMP on convergence curves for F 1, F 5, et
F 8 on 10d, respectivement.

F 5, and F 7, which may be due to the unreliable accuracy of DUM. Thus, DUM cannot
efficiently guide the search of algorithms when there are many local optima on F 5 and
F 7 in the 10-dimensional test problems, and it cannot accurately fit the fitness landscape
of F 3, F 5, and F 7 in the 10-dimensional test problems. In contrast, the proposed UMP
still performs well on these problems in 10 dimensions.

A comparison experiment is carried out to further analyze the proposed method
on F 8, with the results shown in Figure 7. In the figure, the RBFN surrogate is used
as a regression function to approximate the exact function, and the UMP and DUM
methods are used to estimate the uncertainty of the prediction of RBFN. From Figure 7a,
there is a significant error between the prediction values of RBFN and the exact function.
The proposed UMP is able to measure this error more accurately than DUM. Besides,
the values adjusting the prediction of RBFN from the proposed UMP are better at


C. Hu, S. Zeng, and C. Li

Table 3: Comparing the average fitness values (shown as Avg ± Std) of SVM/DUM and SVM/UMP, QP/DUM and QP/UMP, and RBFN/DUM and RBFN/UMP. Statistically significant results are evaluated using a Friedman test.


Chiffre 4: Comparison of UMP and DUM on convergence curves for F 1, F 5, and F 8 sur
2d, respectivement.

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

Chiffre 5: Comparison of UMP and DUM on convergence curves for F 1, F 5, and F 8 sur
5d, respectivement.


Chiffre 6: Comparison of UMP and DUM on convergence curves for F 1, F 5, and F 8 sur
10d, respectivement.

approximating the exact function values than DUM, according to the assumption of
the random field model in Eq. (10). Therefore, the proposed UMP measures the uncertainty
of the prediction of RBFN better than DUM. The validity of the proposed UMP is also
investigated in Figure 7b, which shows that the fLCB of UMP is a better approximation
to the lower bound of the exact function values than that of DUM. The DUM method
only reflects how crowded the neighborhood of a solution is; thus it cannot accurately
measure the uncertainty of the prediction of RBFN.
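The idea behind UMP and fLCB can be sketched as follows. This is a minimal illustration, not the paper's exact Eq. (10): the Gaussian correlation function, the names `gauss_corr` and `ump_variance`, and the parameters `theta`, `sigma2`, and `w` are all assumptions made here for the sketch. It computes a simple-kriging-style variance of a zero-mean stationary random field and a lower confidence bound around a surrogate prediction.

```python
import numpy as np

def gauss_corr(A, B, theta=1.0):
    # Stationary Gaussian correlation between two point sets (assumed kernel).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-theta * d2)

def ump_variance(X_train, x, sigma2=1.0, theta=1.0):
    # Variance of a zero-mean stationary random field at x:
    #   s^2(x) = sigma^2 * (1 - r(x)^T R^{-1} r(x))
    # The N x N solve with R is what makes such a measure O(N^3).
    R = gauss_corr(X_train, X_train, theta) + 1e-10 * np.eye(len(X_train))
    r = gauss_corr(X_train, x[None, :], theta).ravel()
    return float(max(sigma2 * (1.0 - r @ np.linalg.solve(R, r)), 0.0))

def f_lcb(f_hat, s2, w=2.0):
    # Lower confidence bound around the surrogate prediction f_hat.
    return f_hat - w * np.sqrt(s2)
```

At a training point the variance collapses toward zero, while far from all training points it approaches the prior variance `sigma2`, which is the qualitative behavior the discussion above relies on.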

4.4 Comparison on Ensemble Surrogates

An experiment is carried out to show the effectiveness of the proposed UMP in the ensem-
ble surrogates case. The proposed method is compared with the uncertainty method
Uens in Eq. (7) and the method VUM in Eq. (8). Table 4 presents the average


Chiffre 7: (un) Illustration of UMP and DUM methods on a 1-d toy example with F 8 func-
tion; the number of training points 100 is considered; red curve represents the exact
fonction; green dashed curve represents the prediction from RBFN; shaded regions rep-
resent the confidence interval of the prediction of RBFN; (b) y − fLCB plot on F 8 dans
single surrogate RBFN case.

best fitness values obtained by UMP/ensemble, Uens/ensemble, and VUM/ensemble
algorithms on a set of test problems. Figures 8, 9, et 10 present the comparison of con-
vergence curves of different algorithms on F 1, F 5, and F 8 with different dimensions,
respectivement.

From Table 4 and Figures 8–10, UMP/ensemble significantly outperforms
Uens/ensemble and VUM/ensemble on most test problems. In these results, the per-
formance of VUM/ensemble is the worst. The performance of UMP/ensemble and
Uens/ensemble is similar on F 4 in 2 and 5 dimensions, and Uens/ensemble is slightly bet-
ter than UMP/ensemble on F 6, F 7, and F 8 in 5 dimensions. However, UMP/ensemble
is worse than Uens/ensemble on F 8 in 10 dimensions, probably because the computational
budget F Es = 100 is too small. Most of the reevaluated solutions are consumed by
a large degree of uncertainty, leaving less opportunity for UMP/ensemble to exploit
the search space sufficiently.

The effectiveness is further analyzed in the ensemble surrogates case in Figure 11. In
the figure, the ensemble of RBFN, QP, and SVM surrogates is used as a regression function
to approximate the exact F 8 function, and the UMP, VUM, and Uens methods are used
to estimate the uncertainty of the prediction of the ensemble. From Figure 11a,
there is a significant error between the prediction values of the ensemble and the exact
function. The proposed UMP approximates this error best, while the uncertainty estimates
from the VUM and Uens methods are, in general, similar to each other. Besides, according
to the assumption of the random field model in Eq. (10), the values obtained by UMP are
in general a better approximation of the exact function values than those of VUM
and Uens. Figure 11b also verifies that the fLCB of UMP is the best approximation to the
lower bound of the exact function values.

Regarding the uncertainty method Uens, it is the maximum difference among the out-
puts of the ensemble members for the prediction of a solution. In essence, this method
describes the prediction disagreement among ensemble members at a solution. Similarly,
for the method VUM, the uncertainty of the prediction of a solution is measured by the
variance of the predictions output by the base surrogates of the ensemble. This method
indicates the average squared deviation of the base surrogates about the output of
the ensemble surrogates. From a probability and statistics viewpoint, such ways of


je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

/

e
d
toi
e
v
c
o
un
r
t
je
c
e

p
d

je

F
/

/

/

/

/

3
1
1
5
3
2
0
7
1
9
3
2
e
v
c
o
_
un
_
0
0
3
1
6
p
d

.

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

An Uncertainty Measure for Prediction of Non-Gaussian Process Surrogates

Tableau 4: Comparing the averages fitness values (shown as Avg ± Std) of UMP/
ensemble, Uens/ensemble, and VUM/ensemble. Statistically significant results evalu-
ated using a Friedman test.

Problem  d    UMP/ensemble            VUM/ensemble            Uens/ensemble

F1       2    7.7161E-5 ± 1.5035E-4   1.3604 ± 1.2453         2.0820E-4 ± 2.2511E-4
         5    0.0011 ± 0.0012         12.1513 ± 8.6543        0.0526 ± 0.0614
         10   1.0012 ± 0.7830         23.5616 ± 9.8029        33.0773 ± 25.3703

F2       2    9.3382E-3 ± 0.0197      3.7257 ± 2.9982         0.0810 ± 0.1031
         5    0.1555 ± 0.0832         77.1360 ± 38.0107       2.6106 ± 2.0893
         10   9.3609 ± 3.5025         284.9453 ± 95.0346      141.5993 ± 40.6508

F3       2    4.1281E-4 ± 5.1014E-4   3.4762 ± 2.0919         0.6851 ± 1.3881
         5    0.1005 ± 0.1455         28.3199 ± 11.8856       2.5193 ± 1.5458
         10   537.8770 ± 150.8156     1204.2153 ± 381.1870    889.4860 ± 271.9350

F4       2    0.0 ± 0.0               2.7002 ± 1.1667         0.0872 ± 0.0544
         5    0.1200 ± 0.3249         3.5092 ± 0.7333         0.6518 ± 0.2003
         10   3.2400 ± 1.9448         5.5079 ± 0.5751         1.0563 ± 0.1378

F5       2    0.6791 ± 0.3800         10.3965 ± 4.9964        1.3200 ± 1.3181
         5    1.3107 ± 0.3293         13.8540 ± 2.8998        12.600 ± 11.0381
         10   1.3724 ± 0.3217         13.1730 ± 0.9270        21.0000 ± 8.4332

F6       2    0.1351 ± 0.0789         3.8192 ± 1.8471         0.0 ± 0.0
         5    0.6159 ± 0.1419         8.4735 ± 1.5095         0.1200 ± 0.3249
         10   1.4022 ± 0.3057         10.8165 ± 1.1127        10.2000 ± 4.8249

F7       2    0.0076 ± 0.0090         0.1469 ± 0.1503         0.2916 ± 0.2351
         5    63.8693 ± 30.8029       81.6944 ± 35.7565       59.3588 ± 38.5356
         10   235.0830 ± 119.8799     585.8805 ± 231.1875     334.1423 ± 157.4397

F8       2    1.7132 ± 1.1547         3.3034 ± 1.8626         2.7242 ± 1.3798
         5    19.9698 ± 5.5287        24.4794 ± 5.4224        17.1605 ± 4.8970
         10   69.4359 ± 7.8951        74.3970 ± 11.7897       58.7404 ± 8.5256

Average rank  1.27                    2.83                    1.90
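The average ranks in the last row of Table 4 can be approximated by ranking the three methods on each (problem, d) case and averaging, a sketch of Friedman-style ranking. The function name `average_ranks` is an assumption of this sketch, and the paper's exact procedure likely ranks per independent run (and handles ties), so the printed values need not match exactly.

```python
import numpy as np

def average_ranks(scores):
    # scores: (cases, methods) array of average best fitness values; lower is better.
    # Rank the methods within each case (1 = best), then average down the columns.
    scores = np.asarray(scores, dtype=float)
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1  # no tie handling
    return ranks.mean(axis=0)
```

Feeding the 24 (problem, d) rows of mean values into this gives per-method ranks close to the printed 1.27, 2.83, and 1.90.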

Chiffre 8: Comparison of UMP, Uens and VUM on convergence curves for F 1, F 5, et
F 8 on 2d, respectivement.


je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

/

e
d
toi
e
v
c
o
un
r
t
je
c
e

p
d

je

F
/

/

/

/

/

3
1
1
5
3
2
0
7
1
9
3
2
e
v
c
o
_
un
_
0
0
3
1
6
p
d

.

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

C. Hu, S. Zeng, and C. Li

Chiffre 9: Comparison of UMP, Uens and VUM on convergence curves for F 1, F 5, et
F 8 on 5d, respectivement.

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

Chiffre 10: Comparison of UMP, Uens and VUM on convergence curves for F 1, F 5, et
F 8 on 10d, respectivement.


Chiffre 11: (un) Illustration of UMP, Uens and VUM methods on a 1-d toy example with
F 8 fonction; the number of training points 100 is considered; red curve represents the
exact function; green dashed curve represents the ensemble prediction from the weight
sum of prediction of RBFN, QP and SVM; shaded regions represent confidence interval
of the ensemble prediction; (b) y − fLCB plot on F 8 in ensemble surrogates case.

measuring the uncertainty of prediction can be insufficient. Therefore, these compared
methods are potentially unreliable for measuring the uncertainty of the prediction of
surrogates.
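The two compared measures just described can be sketched directly from their prose definitions. Eqs. (7) and (8) themselves are not reproduced in this section, so any weighting they carry may differ; the names `u_ens` and `vum` and the plain unweighted forms below are assumptions of this sketch.

```python
import numpy as np

def u_ens(member_preds):
    # Maximum difference among the ensemble members' predictions at one solution.
    p = np.asarray(member_preds, dtype=float)
    return float(p.max() - p.min())

def vum(member_preds):
    # Variance of the base surrogates' outputs about the ensemble mean prediction.
    p = np.asarray(member_preds, dtype=float)
    return float(np.mean((p - p.mean()) ** 2))
```

Both quantities depend only on member disagreement at a single point, which is why they can vanish where all members agree yet are all wrong, the insufficiency argued above.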

5 Conclusion

This article addresses the issue that there is no theoretical method to measure
the uncertainty of prediction of Non-GP surrogates, and it proposes a theoretical


method to measure the uncertainty. In this method, a stationary random field with a
known zero mean is used to measure the uncertainty of prediction of Non-GP sur-
rogates. The method's effectiveness has been verified through experiments in the
single and ensemble surrogates cases. The experimental results demonstrate that the
proposed method is more promising than the other methods on a set of test problems.

Although the performance of the proposed method is competitive, its computa-
tional cost is higher than that of the others. The computational cost of UMP, DUM,
Uens, and VUM is O(N^3), O(N), O(1), and O(1), respectively, where N is the number of
training samples. Thus, the main limitation of the method is the higher computational
cost of measuring the uncertainty for the prediction of Non-GP surrogates. Therefore,
our future work is to address this drawback using transfer learning.
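The O(N^3) term comes from factorizing the N x N correlation matrix of the random field. One standard mitigation, separate from the transfer-learning direction mentioned above, is to factorize once per model update and reuse the Cholesky factor, so each subsequent uncertainty query costs only O(N^2). A sketch under the same assumed Gaussian kernel as before (`factorize`, `variance_query`, `theta`, and `nugget` are names and defaults chosen here, not the paper's):

```python
import numpy as np

def factorize(X, theta=1.0, nugget=1e-8):
    # O(N^3), done once per model update: Cholesky factor of R = L L^T.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.linalg.cholesky(np.exp(-theta * d2) + nugget * np.eye(len(X)))

def variance_query(L, X, x, theta=1.0):
    # O(N^2) per query: s^2(x) = 1 - ||L^{-1} r(x)||^2,
    # since r^T R^{-1} r = ||L^{-1} r||^2 when R = L L^T.
    r = np.exp(-theta * ((X - x) ** 2).sum(axis=-1))
    z = np.linalg.solve(L, r)  # a triangular solve in principle; generic solve for brevity
    return float(max(1.0 - z @ z, 0.0))
```

This does not change the asymptotic cost of building the model, which is why transfer learning (or sparse approximations) remains the more fundamental remedy.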

Remerciements

This work was supported in part by the National Natural Science Foundation of China
under Grants 62076226, in part by the Fundamental Research Funds for the Central
Universities China University of Geosciences (Wuhan) under Grant CUGGC02, et en
part by the 111 project under Grant B17040.

Les références

Branke, J., and Schmidt, C. (2005). Faster convergence by means of fitness estimation. Soft Com-

puting, 9(1):13–20. 10.1007/s00500-003-0329-4

Brun, J., Krettek, J., Hoffmann, F., and Bertram, T. (2009). Multi-objective optimization with
controlled model assisted evolution strategies. Evolutionary Computation, 17(4):577–593.
10.1162/evco.2009.17.4.17408

Chugh, T., Jin, Y., Miettinen, K., Hakanen, J., and Sindhya, K. (2016). A surrogate-assisted
reference vector guided evolutionary algorithm for computationally expensive many-
IEEE Transactions on Evolutionary Computation, 22(1):129–142.
objective optimization.
10.1109/TEVC.2016.2622301

Chugh, T., Sindhya, K., Hakanen, J., and Miettinen, K. (2019). A survey on handling computa-
tionally expensive multiobjective optimization problems with evolutionary algorithms. Soft
Computing, 23(9):3137–3166. 10.1007/s00500-017-2965-0

Coelho, R.. F., and Bouillard, P.. (2011). Multi-objective reliability-based optimization with stochas-

tic metamodels. Evolutionary Computation, 19(4):525–560. 10.1162/EVCO_a_00034

Cristianini, N., and Shawe-Taylor, J.. (2000). An introduction to support vector machines and other

kernel-based learning methods. Cambridge: la presse de l'Universite de Cambridge.

Emmerich, M., Giannakoglou, K., and Naujoks, B. (2006). Single-and multiobjective evolutionary
optimization assisted by Gaussian random field metamodels. IEEE Transactions on Evolution-
ary Computation, 10(4):421–439. 10.1109/TEVC.2005.859463

Emmerich, M., Giotis, UN., Özdemir, M., Bäck, T., and Giannakoglou, K. (2002). Metamodel-
assisted evolution strategies. Dans 2002 International Conference on Parallel Problem Solving from
Nature, pp. 361–370.

Gal, Y., and Ghahramani, Z. (2015). Dropout as a Bayesian approximation: Representing model
uncertainty in deep learning. Dans 2015 International Conference on Machine Learning, pp. 1050–
1059.

Guo, D., Jin, Y., Ding, J., and Chai, T. (2018). Heterogeneous ensemble-based infill criterion for
evolutionary multiobjective optimization of expensive problems. IEEE Transactions on Cy-
bernetics, 49(3):1012–1025. 10.1109/TCYB.2018.2794503

Evolutionary Computation Volume 31, Nombre 1

69

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

/

e
d
toi
e
v
c
o
un
r
t
je
c
e

p
d

je

F
/

/

/

/

/

3
1
1
5
3
2
0
7
1
9
3
2
e
v
c
o
_
un
_
0
0
3
1
6
p
d

.

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

C. Hu, S. Zeng, and C. Li

Hart, E., Ross, P., and Nelson, J.. (1998). Solving a real-world problem using an evolving heuristi-
cally driven schedule builder. Evolutionary Computation, 6(1):61–80. 10.1162/evco.1998.6.1.61

Hutter, F., Kotthoff, L., and Vanschoren, J.. (2019). Automated machine learning: Methods, systèmes,

challenges. Berlin: Springer-Nature.

Jin, Oui. (2011). Surrogate-assisted evolutionary computation: Recent advances and future chal-
lenges. Swarm and Evolutionary Computation, 1(2):61–70. 10.1016/j.swevo.2011.05.001

Jin, Y., Olhofer, M., and Sendhoff, B. (2000). On evolutionary optimization with approximate fit-
ness functions. Dans 2000 IEEE Genetic and Evolutionary Computation Conference (GECCO), pp.
786–793.

Jin, Y., Wang, H., Chugh, T., Guo, D., and Miettinen, K. (2018). Data-driven evolutionary optimiza-
tion: An overview and case studies. IEEE Transactions on Evolutionary Computation, 23(3):442–
458. 10.1109/TEVC.2018.2869001

Jones, D. R., Schonlau, M., and Welch, W. J.. (1998). Efficient global optimization of expensive
black-box functions. Journal of Global Optimization, 13(4):455–492. 10.1023/UN:1008306431147

Lewis, R.. M., Torczon, V., and Trosset, M.. W. (2000). Direct search methods: Then and now. Journal
of Computational and Applied Mathematics, 124(1-2):191–207. 10.1016/S0377-0427(00)00423-4

Li, C., Lequel, S., and Yang, M.. (2014). An adaptive multi-swarm optimizer for dynamic optimiza-

tion problems. Evolutionary Computation, 22(4):559–594. 10.1162/EVCO_a_00117

Liu, B., Chen, Q., Zhang, Q., Liang, J., Suganthan, P.. N., and Qu, B. (2014). Problem definitions
and evaluation criteria for computational expensive optimization. Dans 2014 IEEE Congress on
Evolutionary Computation, pp. 2081–2088.

Liu, Y., Yao, X., and Higuchi, T. (2000). Evolutionary ensembles with negative correlation learning.

IEEE Transactions on Evolutionary Computation, 4(4):380–387. 10.1109/4235.887237

MacQueen, J.. (1967). Some methods for classification and analysis of multivariate observations.
Dans 1967 5th Berkeley Symposium on Mathematical Statistics and Probability, pp. 281–297.

Michalewicz, Z., and Schoenauer, M.. (1996). Evolutionary algorithms for constrained parameter
optimization problems. Evolutionary Computation, 4(1):1–32. 10.1162/evco.1996.4.1.1

Pedregosa, F., Varoquaux, G., Gramfort, UN., Michel, V., Thirion, B., Grisel, O., Blondel, M., Pretten-
hofer, P., Blanc, R., and Dubourg, V. (2011). Scikit-learn: Machine learning in Python. Journal
of Machine Learning Research, 12:2825–2830.

Preen, R.. J., and Bull, L. (2016). Design mining interacting wind turbines. Evolutionary Computa-

tion, 24(1):89–111. 10.1162/EVCO_a_00144

Queipo, N. V., Haftka, R.. T., Shyy, W., Goel, T., Vaidyanathan, R., and Tucker, P.. K. (2005).
Surrogate-based analysis and optimization. Progress in Aerospace Sciences, 41(1):1–28.
10.1016/j.paerosci.2005.02.001

Sprecher, D. UN. (1993). A universal mapping for Kolmogorov’s superposition theorem. Neural

Networks, 6(8):1089–1094. 10.1016/S0893-6080(09)80020-8

Stein, M.. (1987). Large sample properties of simulations using Latin hypercube sampling. Technologie-

nometrics, 29(2):143–151. 10.1080/00401706.1987.10488205

Tong, H., Huang, C., Liu, J., and Yao, X. (2019). Voronoi-based efficient surrogate-assisted evolu-
tionary algorithm for very expensive problems. Dans 2019 IEEE Congress on Evolutionary Com-
putation, pp. 1996–2003.

Torczon, V., and Trosset, M.. W. (1998). Using approximations to accelerate engineering design
optimization. Dans 1998 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis
and Optimization, p. 4800.

70

Evolutionary Computation Volume 31, Nombre 1

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

/

e
d
toi
e
v
c
o
un
r
t
je
c
e

p
d

je

F
/

/

/

/

/

3
1
1
5
3
2
0
7
1
9
3
2
e
v
c
o
_
un
_
0
0
3
1
6
p
d

.

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

An Uncertainty Measure for Prediction of Non-Gaussian Process Surrogates

Ulmer, H., Streichert, F., and Zell, UN. (2003). Evolution strategies assisted by Gaussian processes
with improved preselection criterion. Dans 2003 IEEE Congress on Evolutionary Computation,
pp. 692–699.

Wang, H., Jin, Y., and Doherty, J.. (2017). Committee-based active learning for surrogate-assisted
particle swarm optimization of expensive problems. IEEE Transactions on Cybernetics,
47(9):2664–2677. 10.1109/TCYB.2017.2710978

Wang, H., Jin, Y., and Jansen, J.. Ô. (2016). Data-driven surrogate-assisted multiobjective evo-
lutionary optimization of a trauma system. IEEE Transactions on Evolutionary Computation,
20(6):939–952. 10.1109/TEVC.2016.2555315

Wang, W., Liu, H.-L., and Tan, K. C. (2022). A surrogate-assisted differential evolution algorithm
for high-dimensional expensive optimization problems. IEEE Transactions on Cybernetics, 1–
13. 10.1109/TCYB.2022.3175533

Zhan, D., and Xing, H. (2021). A fast Kriging-assisted evolutionary algorithm based on in-
cremental learning. IEEE Transactions on Evolutionary Computation, 25(5):941–955. 10.1109/
TEVC.2021.3067015

Zhang, F., Mei, Y., Nguyen, S., Zhang, M., and Tan, K. C. (2021). Surrogate-assisted evolutionary
multitask genetic programming for dynamic flexible job shop scheduling. IEEE Transactions
on Evolutionary Computation, 25(4):651–665. 10.1109/TEVC.2021.3065707

Zhang, M., Li, H., Pan, S., Lyu, J., Ling, S., and Su, S. (2021). Convolutional neural networks
based lung nodule classification: A surrogate-assisted evolutionary algorithm for hyperpa-
rameter optimization. IEEE Transactions on Evolutionary Computation, 25(5):869–882. 10.1109/
TEVC.2021.3060833

je

D
o
w
n
o
un
d
e
d

F
r
o
m
h

t
t

p

:
/
/

d
je
r
e
c
t
.

m

je
t
.

/

/

e
d
toi
e
v
c
o
un
r
t
je
c
e

p
d

je

F
/

/

/

/

/

3
1
1
5
3
2
0
7
1
9
3
2
e
v
c
o
_
un
_
0
0
3
1
6
p
d

.

F

b
oui
g
toi
e
s
t

t

o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3

Evolutionary Computation Volume 31, Nombre 1

71
Télécharger le PDF