Commit aee39742 authored by Julien Lin's avatar Julien Lin

Merge branch 'master' of

parents a583a3f0 28462bf6
@@ -122,6 +122,9 @@ Relationship to Metropolis-Hastings algorithm.
Sampling in a parametrized approximation of the objective function
(i.e. from uniform to Dirac(s)).
Main question: how to choose the temperature bounds?
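The Metropolis acceptance rule and a cooling schedule running between the two temperature bounds can be sketched as follows. This is a minimal illustration, not the course's reference implementation; the geometric schedule and the function names are assumptions.

```python
import math
import random

def metropolis_accept(delta, temperature):
    """Always accept an improvement (delta <= 0 for minimization);
    accept a degradation of `delta` with probability exp(-delta/T)."""
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)

def geometric_schedule(t_max, t_min, n_steps):
    """Temperatures decreasing geometrically from t_max down to t_min,
    so the walk goes from near-uniform sampling to near-Dirac sampling."""
    alpha = (t_min / t_max) ** (1 / (n_steps - 1))
    return [t_max * alpha**i for i in range(n_steps)]
```

Choosing `t_max` so that early degradations are almost always accepted, and `t_min` so that late ones almost never are, is one common heuristic for the bounds.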
Evolutionary Algorithms
@@ -188,6 +191,19 @@ How to ensure convergence?
Ant Colony Algorithms
def new_ant(other_ants, pheromones):
    parameters = estimate(other_ants, pheromones)
    ant = sample(parameters, pheromones)
    return ant
What kind of algorithm family is it?
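As a hedged illustration of the sampling step above: an ant can build a solution by picking components with probability proportional to their pheromone level. The `sample_path` helper and the node-level pheromone vector are assumptions for this sketch, not the actual `estimate`/`sample` of the pseudocode.

```python
import random

def sample_path(pheromones, n_nodes):
    """Build a permutation of nodes, choosing each next node with
    probability proportional to its pheromone level (a minimal sketch)."""
    path = []
    nodes = list(range(n_nodes))
    for _ in range(n_nodes):
        weights = [pheromones[n] for n in nodes]
        choice = random.choices(nodes, weights=weights)[0]
        path.append(choice)
        nodes.remove(choice)
    return path
```

Because each new solution is drawn from a distribution estimated from previous solutions, this sampling view is what links ant colony algorithms to estimation-of-distribution approaches.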
Covariance Matrix Adaptation Evolutionary Strategies
Problem modelization
@@ -211,6 +227,13 @@ Main models
> - multi-modal,
> - multi-objectives (cf. Pareto optimality).
![A diagram with colored points and lines](/docs/figures/Pareto_optimality.svg
"Pareto optimality --- (c) Johann Dreo (Yes, I put it on Wikipedia,
too).")*Two points A and B are said "non-dominated" when they are better than each other
on one objective f1 or f2, the point C being "dominated". The set of
non-dominated point is the Pareto front (red line).*
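The dominance relation of the caption can be sketched as follows, assuming minimization on every objective (the function names are illustrative):

```python
def dominates(a, b):
    """True if point a dominates point b (minimization): a is no worse
    on every objective and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

With objectives (f1, f2), the points (1, 3), (2, 2) and (3, 1) are mutually non-dominated, while (3, 3) is dominated by (2, 2).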
Constraints management
@@ -220,6 +243,29 @@ Constraints management
> - reparation,
> - generation.
"Feasible" solutions honor the constraints, "unfeasible" ones do not.
Constraint management approaches fall into three categories:
- Penalization: the objective function is responsible for spotting unfeasible
solutions and should ensure that their value is worse than that of feasible ones.
- Generation: variation operators are responsible for always producing solutions
that are feasible.
- Reparation: a specialized operator is responsible for taking an unfeasible
solution and making it feasible.
Penalization is the easiest to implement and the most used in practice,
but it can heavily alter the objective function as seen by the algorithm.
For instance, it can make it hard for the algorithm to find solutions that are
located on the bounds of the feasible domain.
Generation and reparation can introduce a bias in the search, which can be
either bad (if the bias works against generating good solutions),
good (if it implements a heuristic toward good solutions),
or mildly bad (if the heuristic is too strong).
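A minimal sketch of the penalization approach, assuming a minimized objective and a `constraint_violation` helper that returns 0 for feasible solutions and a positive amount otherwise (both names are hypothetical):

```python
def penalized(objective, constraint_violation, weight=1e6):
    """Wrap a minimized objective so that unfeasible solutions always
    score worse than feasible ones (minimal penalization sketch)."""
    def wrapped(x):
        return objective(x) + weight * constraint_violation(x)
    return wrapped
```

A large static `weight` guarantees unfeasible solutions are worse, but it also creates the steep cliff at the feasibility boundary mentioned above; adaptive weights are one way to soften it.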
Performance evaluation
@@ -313,7 +359,8 @@ Empirical evaluation
> Use robust estimators: median instead of mean, Inter Quartile Range instead of standard deviation.
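For instance, using only the standard library (an illustrative sketch, not prescribed by the course), both robust estimators can be computed from the quartiles of a sample of runs:

```python
import statistics

def robust_summary(samples):
    """Median and inter-quartile range: location and spread estimates
    that are insensitive to a few outlier runs."""
    q1, q2, q3 = statistics.quantiles(samples, n=4)
    return q2, q3 - q1
```

On runs `[1, 2, 3, 4, 100]` the mean (22) is dragged up by the single outlier, while the median stays at 3.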
## Expected Run Time Empirical Cumulative Distribution Functions
> On Run Time: ERT-ECDF.
@@ -330,7 +377,16 @@ Empirical evaluation
> The dual of the ERT-ECDF can be easily computed for quality (EQT-ECDF).
> 3D ERT/EQT-ECDF may be useful for terminal comparison.
> 2D ECDF of trajectories
> --- called the Empirical Attainment Function (EAF) ---
> may be useful for terminal comparison, as it is a
> generalization of the two ECDFs above.
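An ECDF over run times can be sketched as follows: collect, for several independent runs, the run time (e.g. number of evaluations) at which each run first reaches a quality target, then compute the fraction of runs that succeeded within a given budget. The `hitting_times` data is invented for illustration.

```python
def ecdf(values):
    """Empirical cumulative distribution function of the given values:
    returns a function t -> fraction of values <= t."""
    ordered = sorted(values)
    def fraction_below(t):
        return sum(1 for v in ordered if v <= t) / len(ordered)
    return fraction_below

# Run times at which each independent run first hit the quality target:
hitting_times = [120, 80, 200, 150]
F = ecdf(hitting_times)
```

Swapping run times for final qualities at a fixed time budget gives the dual EQT-ECDF mentioned above.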
![A diagram showing the progression from single evaluation of a solution's
quality to the more generic QT-EAF.](/docs/figures/QTEAF_construction.svg)*The
Empirical Attainment Function is a generalization of an ECDF of the
value trajectories, i.e. the generic view on a stochastic solver's
performance.*
## Other tools
@@ -23,7 +23,7 @@ def to_sensors(sol):
def cover_sum(sol, domain_width, sensor_range, dim):
"""Compute the coverage quality of the given vector."""
assert(0 < sensor_range <= math.sqrt(2))
assert(0 < domain_width)
assert(dim > 0)
assert(len(sol) >= dim)