Workshop on Black-Box Optimization Benchmarking

Tobias Glasmachers

Mar 3, 2021, 4:09:26 PM
to coseal
Dear colleagues,

We cordially invite you to participate in the

WORKSHOP ON BLACK-BOX OPTIMIZATION BENCHMARKING (BBOB 2021)
held as part of the

2021 Genetic and Evolutionary Computation Conference (GECCO-2021)
July 10–14, Lille, France
http://gecco-2021.sigevo.org

which will take place in online/virtual mode.

Submission deadline: April 12, 2021
Notification: April 26, 2021
Camera-ready: May 3, 2021
Registration deadline: May 3, 2021



Benchmarking of optimization algorithms is a crucial part of their design and
their application in practice. The Comparing Continuous Optimizers platform
(COCO, https://github.com/numbbo/coco) has been developed over the past decade
to support algorithm developers and practitioners alike by automating
benchmarking experiments for black-box optimization algorithms on single- and
bi-objective, unconstrained continuous problems in exact and noisy, as well as
expensive and non-expensive, scenarios.

For the 11th Black-Box Optimization Benchmarking workshop (BBOB 2021), we plan
to widen our focus towards mixed-integer benchmark problems. Concretely, we
highly encourage submissions describing benchmarking results of black-box
optimization algorithms on the single-objective bbob-mixint and the
bi-objective bbob-biobj-mixint suites, previously released at GECCO-2019.

Submissions discussing other aspects of (black-box) benchmarking, especially
on the other available bbob, bbob-noisy, bbob-biobj, and bbob-largescale test
suites, are welcome as well. We particularly encourage submissions about
algorithms from outside the evolutionary computation community, as well as
papers analyzing the large amount of algorithm data already publicly available
through COCO (see https://numbbo.github.io/data-archive/).

As in previous editions of the workshop, we will provide source code in
various languages (C/C++, Matlab/Octave, Java, and Python) for benchmarking
algorithms on the test suites mentioned above. Postprocessing the data and
comparing algorithm performance are equally automated with COCO, up to
ready-made ACM-compliant LaTeX templates for writing the workshop papers.

For details, please visit http://numbbo.github.io/workshops/BBOB-2021/

We are looking forward to your submissions and participation!
The BBOBies