Performance Indicators
Metaheuristics.jl includes performance indicators to assess the performance of evolutionary optimization algorithms.
Available indicators:
Note that minimization is always assumed in Metaheuristics.jl; therefore, these indicators have been developed for minimization problems.
Metaheuristics.PerformanceIndicators — Module

PerformanceIndicators

This module includes performance indicators to assess evolutionary multi-objective optimization algorithms.

- gd: Generational Distance.
- igd: Inverted Generational Distance.
- gd_plus: Generational Distance plus.
- igd_plus: Inverted Generational Distance plus.
- covering: Covering indicator (C-metric).
- hypervolume: Hypervolume indicator.
Example
julia> import Metaheuristics: PerformanceIndicators, TestProblems
julia> A = [ collect(1:3) collect(1:3) ]
3×2 Array{Int64,2}:
1 1
2 2
3 3
julia> B = A .- 1
3×2 Array{Int64,2}:
0 0
1 1
2 2
julia> PerformanceIndicators.gd(A, B)
0.47140452079103173
julia> f, bounds, front = TestProblems.get_problem(:ZDT1);
julia> front
F space
┌────────────────────────────────────────┐
1 │⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠈⠄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠈⢆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠢⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠈⠢⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠉⠢⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
f_2 │⠀⠀⠀⠀⠀⠀⠀⠀⠈⠑⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠲⢄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠒⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠙⠢⢄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠑⠢⢄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠢⠤⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠑⠢⢤⣀⠀⠀⠀⠀⠀│
0 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠒⠢⢄⣀│
└────────────────────────────────────────┘
0 1
f_1
julia> PerformanceIndicators.igd_plus(front, front)
0.0
Generational Distance
Metaheuristics.PerformanceIndicators.gd — Function

gd(front, true_pareto_front; p = 1)

Returns the Generational Distance.

Parameters

front and true_pareto_front can be:

- N×m matrix, where N is the number of points and m is the number of objectives.
- State
- Array{xFgh_indiv} (usually State.population)
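For intuition, the Generational Distance with p = 1 is the average Euclidean distance from each point of front to its nearest point in true_pareto_front. The following is a minimal sketch of that definition (not the package's internal code); it reproduces the value from the example at the top of this page:

```julia
# Average Euclidean distance from each row of `front` to its nearest
# row in `reference` (Generational Distance with p = 1).
function gd_p1(front, reference)
    dists = [minimum(sqrt(sum(abs2, front[i, :] .- reference[j, :]))
                     for j in axes(reference, 1))
             for i in axes(front, 1)]
    return sum(dists) / length(dists)
end

A = [1 1; 2 2; 3 3]
B = A .- 1
gd_p1(A, B)  # ≈ 0.4714 = √2/3: only (3, 3) is at distance √2 from its nearest point in B
```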
Generational Distance Plus
Metaheuristics.PerformanceIndicators.gd_plus — Function

gd_plus(front, true_pareto_front; p = 1)

Returns the Generational Distance Plus.

Parameters

front and true_pareto_front can be:

- N×m matrix, where N is the number of points and m is the number of objectives.
- State
- Array{xFgh_indiv} (usually State.population)
Inverted Generational Distance
Metaheuristics.PerformanceIndicators.igd — Function

igd(front, true_pareto_front; p = 1)

Returns the Inverted Generational Distance.

Parameters

front and true_pareto_front can be:

- N×m matrix, where N is the number of points and m is the number of objectives.
- State
- Array{xFgh_indiv} (usually State.population)
Inverted Generational Distance Plus
Metaheuristics.PerformanceIndicators.igd_plus — Function

igd_plus(front, true_pareto_front; p = 1)

Returns the Inverted Generational Distance Plus.

Parameters

front and true_pareto_front can be:

- N×m matrix, where N is the number of points and m is the number of objectives.
- State
- Array{xFgh_indiv} (usually State.population)
Spacing Indicator
Metaheuristics.PerformanceIndicators.spacing — Function

spacing(A)

Computes the Schott spacing indicator. spacing(A) == 0 means that the vectors in A are uniformly distributed.
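A common formulation of Schott's spacing (sketched below; not necessarily identical to the package internals) is the standard deviation of the ℓ₁ nearest-neighbor distances, which is zero when all points are evenly spaced:

```julia
# Schott's spacing, one common formulation: the standard deviation of
# dᵢ = min_{j≠i} Σₖ |A[i,k] - A[j,k]| (ℓ₁ nearest-neighbor distances).
function schott_spacing(A)
    n = size(A, 1)
    d = [minimum(sum(abs, A[i, :] .- A[j, :]) for j in 1:n if j != i)
         for i in 1:n]
    dbar = sum(d) / n
    return sqrt(sum((dbar .- d) .^ 2) / (n - 1))
end

A = [1 1; 2 2; 3 3]   # evenly spaced points
schott_spacing(A)     # 0.0: all nearest-neighbor distances are equal
```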
Covering Indicator ($C$-metric)
Metaheuristics.PerformanceIndicators.covering — Function

covering(A, B)

Computes the covering indicator (the percentage of vectors in B that are dominated by vectors in A) from two sets of non-dominated solutions.

A and B have size (n, m), where n is the number of samples and m is the vector dimension.

Note that covering(A, B) == 1 means that all solutions in B are dominated by those in A. Moreover, covering(A, B) != covering(B, A) in general.

If A::State and B::State, then covering(A.population, B.population) is computed after ignoring dominated solutions in each set.
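To make the asymmetry concrete, here is a minimal sketch of the C-metric definition (not the package's implementation), applied to two hypothetical fronts where one entirely dominates the other:

```julia
# Pareto dominance for minimization: a dominates b if a is no worse in
# every objective and strictly better in at least one.
dominates(a, b) = all(a .<= b) && any(a .< b)

# Fraction of rows of B dominated by at least one row of A (C-metric sketch).
function covering_sketch(A, B)
    nB = size(B, 1)
    c = count(j -> any(dominates(A[i, :], B[j, :]) for i in axes(A, 1)), 1:nB)
    return c / nB
end

A = [1 3; 2 2; 3 1]    # hypothetical non-dominated set
B = [2 4; 3 3; 4 2]    # every point is dominated by some point of A
covering_sketch(A, B)  # 1.0
covering_sketch(B, A)  # 0.0: the C-metric is not symmetric
```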
Hypervolume
Metaheuristics.PerformanceIndicators.hypervolume — Function

hypervolume(front, reference_point)

Computes the hypervolume indicator, i.e., the volume between the points in front and reference_point.

Note that each point in front must (weakly) dominate reference_point. Also, front is assumed to be a non-dominated set.

If front::State and reference_point::Vector, then hypervolume(front.population, reference_point) is computed after ignoring solutions in front that do not dominate reference_point.
Examples
Computing hypervolume indicator from vectors in a Matrix
julia> import Metaheuristics.PerformanceIndicators: hypervolume
julia> f1 = collect(0:10); # objective 1
julia> f2 = 10 .- collect(0:10); # objective 2
julia> front = [ f1 f2 ]
11×2 Array{Int64,2}:
  0  10
  1   9
  2   8
  3   7
  4   6
  5   5
  6   4
  7   3
  8   2
  9   1
 10   0
julia> reference_point = [11, 11]
2-element Array{Int64,1}:
 11
 11
julia> hv = hypervolume(front, reference_point)
66.0
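In the bi-objective case, this value can be checked by hand with a simple sweep: sort the front by the first objective and accumulate the rectangular slab that each point dominates exclusively. A minimal 2-D sketch of this idea (the package itself handles more objectives) reproduces the result above:

```julia
# Bi-objective hypervolume (minimization) by a sweep over the front
# sorted by f₁. Assumes `front` is a non-dominated N×2 matrix and every
# point dominates `ref`.
function hv2d(front, ref)
    idx = sortperm(front[:, 1])               # ascending in the first objective
    hv, prev_f2 = 0.0, float(ref[2])
    for i in idx
        f1, f2 = front[i, 1], front[i, 2]
        hv += (ref[1] - f1) * (prev_f2 - f2)  # slab dominated only by this point
        prev_f2 = f2
    end
    return hv
end

front = [collect(0:10) 10 .- collect(0:10)]
hv2d(front, [11, 11])  # 66.0, matching the hypervolume call above
```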
Now, let's compute the hypervolume indicator from the result of NSGA3 when solving the DTLZ2 test problem.
julia> using Metaheuristics
julia> import Metaheuristics.PerformanceIndicators: hypervolume
julia> import Metaheuristics: TestProblems, get_non_dominated_solutions
julia> f, bounds, true_front = TestProblems.DTLZ2();
julia> result = optimize(f, bounds, NSGA3());
julia> approx_front = get_non_dominated_solutions(result.population)
100-element Array{Metaheuristics.xFgh_solution{Array{Float64,1}},1}:
 (f = [0.2373236881368023, 0.23633334722831115, 0.9490231796476964], g = [0.0], h = [0.0], x = [7.840e-01, 4.987e-01, …, 5.047e-01])
 (f = [0.9973832296020685, 0.09836124763289159, 0.0003863415701685024], g = [0.0], h = [0.0], x = [2.454e-04, 6.258e-02, …, 5.051e-01])
 (f = [0.8117536281449264, 0.586880082091629, 0.0011605001294531481], g = [0.0], h = [0.0], x = [7.376e-04, 3.985e-01, …, 5.021e-01])
 ⋮
 (f = [0.6205452777493057, 0.7748056094297073, 0.1308751746712755], g = [0.0], h = [0.0], x = [8.345e-02, 5.701e-01, …, 5.224e-01])
 (f = [0.6599173179265619, 0.1280224448641125, 0.7460515158764294], g = [0.0], h = [0.0], x = [5.331e-01, 1.220e-01, …, 5.027e-01])
julia> reference_point = nadir(result.population)
3-element Array{Float64,1}:
 1.005528582043122
 1.0084944965625733
 1.001722863814578
julia> hv = hypervolume(approx_front, reference_point)
0.4232314666549783
$\Delta_p$ (Delta $p$)
Metaheuristics.PerformanceIndicators.deltap — Function

deltap(front, true_pareto_front; p = 1)
Δₚ(front, true_pareto_front; p = 1)

Returns the averaged Hausdorff distance indicator, a.k.a. Δₚ (Delta p).

"Δₚ" can be typed as \Delta<tab>\_p<tab>.

Parameters

front and true_pareto_front can be:

- N×m matrix, where N is the number of points and m is the number of objectives.
- Array{xFgh_indiv} (usually State.population)
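With p = 1, Δ₁ is simply the larger of the two averaged nearest-neighbor distances (front to reference and reference to front, i.e., GD- and IGD-style terms). A minimal sketch of this special case, not the package's general implementation:

```julia
# Mean Euclidean distance from each row of X to its nearest row in Y.
mean_dist(X, Y) = sum(minimum(sqrt(sum(abs2, X[i, :] .- Y[j, :]))
                              for j in axes(Y, 1))
                      for i in axes(X, 1)) / size(X, 1)

# Averaged Hausdorff distance with p = 1: the larger of the two directions.
delta_1(front, reference) = max(mean_dist(front, reference),
                                mean_dist(reference, front))

A = [1 1; 2 2; 3 3]
B = A .- 1
delta_1(A, B)  # ≈ 0.4714: here both directions give √2/3, so Δ₁ equals GD and IGD
```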
$\varepsilon$-Indicator
Unary and binary $\varepsilon$-indicator (epsilon-indicator). Details in E. Zitzler, L. Thiele, M. Laumanns, C.M. Fonseca, V.G. da Fonseca (2003)
Metaheuristics.PerformanceIndicators.epsilon_indicator — Function

epsilon_indicator(A, B)

Computes the ε-indicator for non-dominated sets A and B. It is assumed that all values in A and B are positive. If negative, the sets are translated to positive values.

Interpretation:

- epsilon_indicator(A, PF) is unary if PF is the Pareto-optimal front.
- epsilon_indicator(A, B) == 1 means that neither set is better than the other.
- epsilon_indicator(A, B) < 1 means that A is better than B.
- epsilon_indicator(A, B) > 1 means that B is better than A.
- Values closer to 1 are preferable.
Examples
julia> A1 = [4 7;5 6;7 5; 8 4.0; 9 2];
julia> A2 = [4 7;5 6;7 5; 8 4.0];
julia> A3 = [6 8; 7 7;8 6; 9 5;10 4.0 ];
julia> PerformanceIndicators.epsilon_indicator(A1, A2)
1.0
julia> PerformanceIndicators.epsilon_indicator(A1, A3)
0.9
julia> f, bounds, pf = Metaheuristics.TestProblems.ZDT3();
julia> res = optimize(f, bounds, NSGA2());
julia> PerformanceIndicators.epsilon_indicator(res, pf)
1.00497701620997
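The values above can be reproduced from the multiplicative definition in Zitzler et al. (2003): I_ε(A, B) is the smallest factor ε such that every point of B is weakly dominated by some point of A scaled by ε. The sketch below assumes positive-valued sets and omits the translation step mentioned in the docstring:

```julia
# Multiplicative ε-indicator for positive-valued minimization sets:
# max over b ∈ B of min over a ∈ A of max over objectives of aᵢ / bᵢ.
function eps_indicator(A, B)
    maximum(minimum(maximum(A[i, :] ./ B[j, :]) for i in axes(A, 1))
            for j in axes(B, 1))
end

A1 = [4 7; 5 6; 7 5; 8 4.0; 9 2]
A3 = [6 8; 7 7; 8 6; 9 5; 10 4.0]
eps_indicator(A1, A1)  # 1.0: a set never improves on itself
eps_indicator(A1, A3)  # 0.9, matching the example above
```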