Examples
This package provides a variety of tools for optimization. This section gives examples of using the implemented metaheuristics.
Single-Objective Optimization
First, import the package:
using Metaheuristics

Now, let us define the objective function to be minimized:
f(x) = 10length(x) + sum( x.^2 - 10cos.(2π*x) )
f (generic function with 1 method)

The search space (a.k.a. box-constraints) can be defined as follows:
bounds = boxconstraints(lb = -5ones(10), ub = 5ones(10))
BoxConstrainedSpace{Float64}([-5.0, -5.0, -5.0, -5.0, -5.0, -5.0, -5.0, -5.0, -5.0, -5.0], [5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0], [10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0], 10, true)

boxconstraints is an alias for BoxConstrainedSpace. The matrix format bounds = [-5ones(10) 5ones(10)]' is also supported, but using named parameters is recommended for clarity.
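For instance, the two definitions below describe the same search space (in the matrix form, the first row holds the lower bounds and the second row the upper bounds; bounds_matrix is just an illustrative name):

bounds = boxconstraints(lb = -5ones(10), ub = 5ones(10))
bounds_matrix = [-5ones(10) 5ones(10)]'  # equivalent 2×10 matrix form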
It is possible to provide some information on the minimization problem. Let's provide the true optimum to stop the optimizer when a tolerance f_tol is satisfied.
information = Information(f_optimum = 0.0)
Information(0.0, Float64[])

Generic options or settings (e.g. budget limitation, tolerances, etc.) can be provided as follows:
options = Options(f_calls_limit = 9000*10, f_tol = 1e-5, seed=1)
Options
=======
  rng:             Random.TaskLocalRNG()
  seed:            1
  x_tol:           1.0e-8
  f_tol:           1.0e-5
  g_tol:           0.0
  h_tol:           0.0
  debug:           false
  verbose:         false
  f_tol_rel:       2.220446049250313e-16
  time_limit:      Inf
  iterations:      0
  f_calls_limit:   90000.0
  store_convergence: false
  parallel_evaluation: false
Now, we can provide the Information and Options to the optimizer (ECA in this example).
algorithm = ECA(information = information, options = options)
Algorithm Parameters
====================
  ECA(η_max=2.0, K=7, N=0, N_init=0, p_exploit=0.95, p_bin=0.02, ε=0.0, adaptive=false, resize_population=false)
Optimization Result
===================
  Empty status.
Now, the optimization is performed as follows:
result = optimize(f, bounds, algorithm)
Optimization Result
===================
  Iteration:       540
  Minimum:         1.98992
  Minimizer:       [4.9425e-09, -2.63681e-09, 5.26807e-10, …, -2.05997e-09]
  Function calls:  37800
  Total time:      0.1317 s
  Stop reason:     Due to Convergence Termination criterion.
The minimum and minimizer:
minimum(result)
1.9899181141865938

minimizer(result)
10-element Vector{Float64}:
  4.942500864198351e-9
 -2.6368055210774052e-9
  5.268065838332036e-10
 -2.221590569601794e-9
  2.268687428475678e-9
 -0.9949586341971358
  1.364299749866016e-9
  0.994958642923819
  8.33735329614517e-10
 -2.0599681591713486e-9
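If you want to inspect how the best objective value evolved, enable convergence storage in Options and retrieve the history afterwards. A minimal sketch, assuming the store_convergence option and the convergence accessor described in the package documentation:

using Metaheuristics

f(x) = 10length(x) + sum( x.^2 - 10cos.(2π*x) )
bounds = boxconstraints(lb = -5ones(10), ub = 5ones(10))

# store the state at each iteration
options = Options(seed = 1, store_convergence = true)
result = optimize(f, bounds, ECA(options = options))

# f_calls[i]: evaluations used up to iteration i; best_f_value[i]: best value found so far
f_calls, best_f_value = convergence(result)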
Constrained Optimization
It is common for optimization models to include constraints that must be satisfied. For example, consider the Rosenbrock function constrained to a disk.
Minimize:
\[f(x,y)=(1-x)^{2}+100(y-x^{2})^{2}\]
subject to:
\[x^{2}+y^{2}\leq 2\]
where $-2 \leq x,y \leq 2$.
In Metaheuristics.jl, a feasible solution is one that satisfies $g(x) \leq 0$ and $h(x) \approx 0$. Hence, in this example the constraint is given by $g(x) = x^2 + y^2 - 2 \leq 0$. Note that the equality and inequality constraints must be stored in Arrays.
In this package, if an algorithm was not designed for constrained optimization, solutions with a lower sum of constraint violations are preferred.
julia> using Metaheuristics

julia> function f(x)
           x, y = x[1], x[2]
           fx = (1-x)^2 + 100(y-x^2)^2
           gx = [x^2 + y^2 - 2] # inequality constraints
           hx = [0.0]           # equality constraints
           # order is important
           return fx, gx, hx
       end
f (generic function with 1 method)

julia> bounds = boxconstraints(lb = [-2.0, -2.0], ub = [2.0, 2.0])
BoxConstrainedSpace{Float64}([-2.0, -2.0], [2.0, 2.0], [4.0, 4.0], 2, true)

julia> optimize(f, bounds, ECA(N=30, K=3))
Optimization Result
===================
  Iteration:       667
  Minimum:         0.00184604
  Minimizer:       [0.957093, 0.915803]
  Function calls:  20010
  Feasibles:       30 / 30 in final population
  Total time:      0.9726 s
  Stop reason:     Maximum objective function calls exceeded.
Multiobjective Optimization
A multiobjective optimization problem is implemented and solved in much the same way. Provide constraints if they exist; otherwise set gx = [0.0]; hx = [0.0]; to indicate an unconstrained multiobjective problem.
julia> using UnicodePlots # to visualize in console (optional)

julia> using Metaheuristics

julia> function f(x)
           # objective functions
           v = 1.0 + sum(x .^ 2)
           fx1 = x[1] * v
           fx2 = (1 - sqrt(x[1])) * v
           fx = [fx1, fx2]
           # constraints
           gx = [0.0] # inequality constraints
           hx = [0.0] # equality constraints
           # order is important
           return fx, gx, hx
       end
f (generic function with 1 method)

julia> bounds = boxconstraints(lb = zeros(30), ub = ones(30))
BoxConstrainedSpace{Float64}([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 … 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 … 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 … 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], 30, true)

julia> optimize(f, bounds, NSGA2())
Optimization Result
===================
  Iteration:       251
  Non-dominated:   100 / 100
  [UnicodePlots scatter of the final population in objective (F) space: f₂ vs. f₁, showing the approximated Pareto front]
  Function calls:  50100
  Feasibles:       100 / 100 in final population
  Total time:      0.4247 s
  Stop reason:     Maximum objective function calls exceeded.
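After the run, the approximated Pareto front can be extracted for further processing or plotting. A short sketch, assuming the pareto_front and positions accessors from the package API (continuing the session above):

julia> result = optimize(f, bounds, NSGA2());

julia> A = pareto_front(result);   # objective vectors of the non-dominated solutions, one row each

julia> X = positions(result);      # decision vectors of the final population

julia> scatterplot(A[:, 1], A[:, 2])  # quick look at the front with UnicodePlots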
Bilevel Optimization
Bilevel optimization problems can be solved with the package BilevelHeuristics.jl, which extends Metaheuristics.jl to handle such hierarchical problems.
Define the objective functions corresponding to the bilevel problem.
Upper level (leader problem):
using BilevelHeuristics
F(x, y) = sum(x.^2) + sum(y.^2)
bounds_ul = boxconstraints(lb = -ones(5), ub = ones(5))

Lower level (follower problem):
f(x, y) = sum((x - y).^2) + y[1]^2
bounds_ll = boxconstraints(lb = -ones(5), ub = ones(5))

Approximate solution:
res = optimize(F, f, bounds_ul, bounds_ll, BCA())

Output:
+=========== RESULT ==========+
  iteration: 108
    minimum: 
          F: 4.03387e-10
          f: 2.94824e-10
  minimizer: 
          x: [-1.1460768817533927e-5, 7.231706879604178e-6, 3.818596951258517e-6, 2.294324313691869e-6, 1.8770952450067828e-6]
          y: [1.998748659975197e-6, 9.479307908087866e-6, 6.180041276047425e-6, -7.642051857319683e-6, 2.434166021682429e-6]
    F calls: 2503
    f calls: 5062617
    Message: Stopped due UL function evaluations limitations. 
 total time: 26.8142 s
+============================+

See the BilevelHeuristics documentation for more information.
Decision-Making
Although Metaheuristics is focused on the optimization part, some decision-making algorithms are available in this package (see Multi-Criteria Decision-Making).
The following example shows how to perform a posteriori decision-making.
julia> # load the problem
julia> f, bounds, pf = Metaheuristics.TestProblems.ZDT1();
julia> # perform multi-objective optimization
julia> res = optimize(f, bounds, NSGA2());
julia> # user preferences
julia> w = [0.5, 0.5];
julia> # set the decision-making algorithm
julia> dm_method = CompromiseProgramming(Tchebysheff());
julia> # find the best decision
julia> sol = best_alternative(res, w, dm_method)
(f = [0.38493217206706115, 0.38037042164979956], g = [0.0], h = [0.0], x = [3.849e-01, 7.731e-06, …, 2.362e-07])
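For intuition, here is a conceptual sketch of what CompromiseProgramming(Tchebysheff()) computes (an illustration, not the package's internal code): it selects the alternative minimizing the weighted Chebyshev distance to the ideal point of the obtained front.

julia> F = pareto_front(res);  # non-dominated objective vectors, one row per solution

julia> z = vec(minimum(F, dims = 1));  # ideal point estimated from the front

julia> scores = [maximum(w .* abs.(F[i, :] .- z)) for i in 1:size(F, 1)];

julia> argmin(scores)  # index of the compromise alternative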
Providing Initial Solutions
Sometimes you may want to provide initial solutions before the optimization process begins. This example illustrates how to do it.
julia> f(x) = abs(x[1]) + x[2] + x[3]^2 # objective function
f (generic function with 1 method)

julia> algo = ECA(N = 61); # optimizer

julia> # one solution can be provided

julia> x0 = [0.5, 0.5, 0.5];

julia> set_user_solutions!(algo, x0, f);

julia> # or multiple solutions can be given

julia> X0 = rand(30, 3); # 30 solutions with dim 3

julia> set_user_solutions!(algo, X0, f);

julia> optimize(f, boxconstraints(lb = [0.0, 0.0, 0.0], ub = [1.0, 1.0, 1.0]), algo)
Batch Evaluation
Evaluating multiple solutions at the same time can reduce computational time. To do so, define your objective function on an N × D input matrix and return the function values (and constraints) as matrices with one row per solution. You also need to set parallel_evaluation=true in Options to indicate that your f is prepared for parallel evaluation.
f(X) = begin
    fx = sum(X.^2, dims=2)       # objective function ∑x²
    gx = sum(X.^2, dims=2) .-0.5 # inequality constraints ∑x² ≤ 0.5
    hx = zeros(0,0)              # equality constraints
    fx, gx, hx
end
options = Options(parallel_evaluation=true)
res = optimize(f, boxconstraints(lb = -10ones(5), ub = 10ones(5)), ECA(options=options))

See the Parallelization tutorial for more details.
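Since f receives the whole population at once, you are free to parallelize its body yourself. A minimal sketch using Base threads (start Julia with several threads, e.g. julia -t 4; rastrigin stands in for an expensive per-solution objective):

using Metaheuristics

# placeholder for an expensive per-solution objective
rastrigin(x) = 10length(x) + sum(x.^2 - 10cos.(2π*x))

function f(X)  # X is an N×D matrix of candidate solutions
    N = size(X, 1)
    fx = zeros(N, 1)
    Threads.@threads for i in 1:N
        fx[i, 1] = rastrigin(X[i, :])  # one row per solution
    end
    gx = sum(X.^2, dims=2) .- 0.5      # same inequality constraint as above
    hx = zeros(0, 0)                   # no equality constraints
    fx, gx, hx
end

options = Options(parallel_evaluation = true)
res = optimize(f, boxconstraints(lb = -10ones(5), ub = 10ones(5)), ECA(options = options))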
Modifying an Existing Metaheuristic
You may need to modify one of the implemented metaheuristics to improve its performance or to test new mechanisms. This example illustrates how to do it.
Be cautious when modifying a metaheuristic: such changes overwrite the default method for that metaheuristic.
Let's assume we want to modify the stopping criterion for ECA. See Contributing for more details.
using Metaheuristics
import LinearAlgebra: norm
# overwrite method
function Metaheuristics.stop_criteria!(
        status,
        parameters::ECA, # It is important to indicate the modified Metaheuristic 
        problem,
        information,
        options,
        args...;
        kargs...
    )
    if status.stop
        # nothing to do
        return
    end
    # Diversity-based stop criterion
    x_mean = zeros(length(status.population[1].x))
    for sol in status.population
        x_mean += sol.x
    end
    x_mean /= length(status.population)
    
    distances_mean = sum(sol -> norm( x_mean - sol.x ), status.population)
    distances_mean /= length(status.population)
    # stop when solutions are close enough to the geometrical center
    new_stop_condition = distances_mean <= 1e-3
    status.stop = new_stop_condition
    # (optional and not recommended) print when this criterion is met
    if status.stop
        @info "Diversity-based stop criterion"
        @show distances_mean
    end
    return
end
f, bounds, opt = Metaheuristics.TestProblems.get_problem(:sphere);
optimize(f, bounds, ECA())
Combinatorial Optimization Examples
Traveling Salesman Problem (TSP)
The TSP seeks the shortest route visiting all cities exactly once and returning to the starting city.
using Metaheuristics
# Distance matrix for 5 cities
distances = [
    0.0  10.0  15.0  20.0  25.0;
    10.0  0.0  35.0  25.0  30.0;
    15.0 35.0   0.0  30.0  20.0;
    20.0 25.0  30.0   0.0  15.0;
    25.0 30.0  20.0  15.0   0.0
]
# Objective: minimize tour length
function tsp_cost(tour)
    n = length(tour)
    total = sum(distances[tour[i], tour[i+1]] for i in 1:n-1)
    total += distances[tour[end], tour[1]]  # return to start
    return total
end
# Solve using GA with order crossover
n_cities = 5
population_size = 100
ga = GA(
    N = population_size,
    initializer = RandomPermutation(N=population_size),
    crossover = OrderCrossover(), # crossover for permutation encoding
    mutation = SlightMutation()
)
result = optimize(tsp_cost, PermutationSpace(n_cities), ga)
best_tour = minimizer(result)
tour_length = minimum(result)
println("Best tour: ", best_tour)
println("Tour length: ", tour_length)Knapsack Problem
Knapsack Problem
The knapsack problem is a classic combinatorial optimization problem where the goal is to maximize the value of items that can be carried in a knapsack with a limited capacity. For example, suppose you have a knapsack with a capacity of 7 kg and the items in the following table:
| Item | Profit | Weight (kg) | 
|---|---|---|
| gold | 1000 | 1 | 
| silver | 500 | 2 | 
| bronze | 250 | 4 | 
| laptop | 800 | 3 | 
| camera | 600 | 2 | 
| necklace | 300 | 1 | 
| watch | 400 | 1 | 
The task is to select the items that maximize profit while respecting the capacity constraint.
using Metaheuristics
# define items and capacity
items = Dict(
    "gold" => (profit=1000, weight=1),
    "silver" => (profit=500, weight=2),
    "bronze" => (profit=250, weight=4),
    "laptop" => (profit=800, weight=3),
    "camera" => (profit=600, weight=2),
    "necklace" => (profit=300, weight=1),
    "watch" => (profit=400, weight=1)
)
capacity = 7
names = collect(keys(items))
profit = [items[k].profit for k in names]
weight = [items[k].weight for k in names]
# load the test problem using the provided parameters
# by default, the problem is encoded as a permutation
f, search_space, optimum = Metaheuristics.TestProblems.knapsack(profit, weight, capacity)
population_size = 100
ga = GA(
    N = population_size,
    initializer = RandomPermutation(N=population_size),
    crossover = OrderCrossover(), # crossover for permutation encoding
    mutation = SlightMutation()
)
# Solve with GA
result = optimize(f, search_space, ga)
# decode the permutation: pack items in the given order until the capacity would be exceeded
perm = minimizer(result)
k = findfirst(>(capacity), cumsum(weight[perm]))
selected_items = isnothing(k) ? perm : perm[1:k-1]
# table with the selected items
println("Item \t Weight \t Value")
for i in selected_items
    println("$(names[i]) \t $(weight[i]) \t $(profit[i])")
end
println("Total value: ", -minimum(result))  # negative because we minimize
Comparing Multiple Algorithms
Compare performance of different algorithms on the same problem:
using Metaheuristics
# Define problem
f(x) = sum(x.^2)
bounds = boxconstraints(lb = -10ones(5), ub = 10ones(5))
# Test multiple algorithms
algorithms = [
    ("ECA", ECA()),
    ("DE", DE()),
    ("PSO", PSO()),
    ("ABC", ABC())
]
println("Algorithm Performance Comparison")
println("-" ^ 50)
for (name, algo) in algorithms
    result = optimize(f, bounds, algo)
    println("$name:")
    println("  Minimum: $(minimum(result))")
    println("  F-calls: $(result.f_calls)")
    println("  Time: $(result.final_time - result.start_time) s")
    println()
end
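Single runs can be misleading for stochastic methods. A small sketch repeating each optimizer over several seeds and reporting mean and standard deviation (using Statistics from the standard library, and assuming each optimizer accepts options as shown earlier):

using Metaheuristics
using Statistics

f(x) = sum(x.^2)
bounds = boxconstraints(lb = -10ones(5), ub = 10ones(5))

algorithms = [
    ("ECA", s -> ECA(options = Options(seed = s))),
    ("DE",  s -> DE(options = Options(seed = s))),
    ("PSO", s -> PSO(options = Options(seed = s)))
]

for (name, make_algorithm) in algorithms
    # fresh optimizer per seed, 10 independent runs
    mins = [minimum(optimize(f, bounds, make_algorithm(seed))) for seed in 1:10]
    println("$name: mean = $(mean(mins))  std = $(std(mins))")
end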
Custom Stopping Criteria Example
Implement diversity-based stopping:
using Metaheuristics
import LinearAlgebra: norm
# Overwrite stop criteria for ECA
function Metaheuristics.stop_criteria!(
        status,
        parameters::ECA,
        problem,
        information,
        options,
        args...;
        kargs...
    )
    
    if status.stop
        return
    end
    
    # Calculate population diversity
    if length(status.population) > 1
        positions = [sol.x for sol in status.population]
        center = sum(positions) / length(positions)
        avg_distance = sum(p -> norm(p - center), positions) / length(positions)
        
        # Stop when population converges
        if avg_distance < 1e-4
            status.stop = true
            status.stop_msg = "Low population diversity"
        end
    end
    
    return
end
# Now use ECA with custom stopping
f(x) = sum(x.^2)
bounds = boxconstraints(lb = -5ones(10), ub = 5ones(10))
result = optimize(f, bounds, ECA())
println(result.stop_msg)