This is the benchmarking platform for Iterative Optimization Heuristics (IOHs).
- Documentation: https://arxiv.org/abs/1810.05281
- Wiki page: https://iohprofiler.github.io
- General Contact: iohprofiler@liacs.leidenuniv.nl
- Mailing List: https://lists.leidenuniv.nl/mailman/listinfo/iohprofiler
IOHexperimenter provides:
- A framework for straightforward benchmarking of any iterative optimization heuristic
- A suite consisting of 23 pre-made Pseudo-Boolean benchmarking functions, with easily accessible methods for adding custom functions and suites
- Logging methods to effortlessly store benchmarking data in a format compatible with IOHanalyzer, with future support for additional data logging options
- (Soon to come:) A framework which significantly simplifies algorithm design
IOHexperimenter is built on:
- `C++` (tested on gcc 5.4.0)
- the `boost.filesystem` library for logging files

IOHexperimenter is available for:
- `C++`
- `R`, as a package: https://github.com/IOHprofiler/IOHexperimenter/tree/R
- `Python` interface (soon to come)
- `Java` interface (soon to come)
The IOHexperimenter has been built on `C++` and tested with the compiler gcc 5.4.0. To use the logging of `csv` output files, the `boost.filesystem` library is required (to install the boost library, please visit https://www.boost.org).
If you are using the tool for the first time, please download or clone this branch and run `make` at the root directory of the project. After running `make` to compile:
- object files will be generated in `build/c/obj`
- three executable files will be generated in `build/c/bin`

Afterwards, you can use the folder `build/c` and the `Makefile` therein for your experiments.
For more details on how to use the `C++` version, please visit this page.
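As a rough illustration of the kind of experiment this setup supports, the sketch below runs a simple (1+1) EA on a hand-written OneMax function. It is deliberately standalone and does not use any IOHexperimenter classes, problems, or loggers (those APIs are documented on the page linked above); it only shows the iterative evaluate-and-update loop that IOHexperimenter is designed to wrap with benchmark problems and logging.

```cpp
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Standalone OneMax: counts the number of ones in a bit string.
// (Illustrative only; IOHexperimenter ships its own PBO problem classes.)
static int one_max(const std::vector<int> &x) {
  return static_cast<int>(std::count(x.begin(), x.end(), 1));
}

int main() {
  const int dimension = 100;
  const int budget = 10000;

  std::mt19937 gen(42);
  std::uniform_int_distribution<int> bit(0, 1);
  std::uniform_real_distribution<double> uni(0.0, 1.0);

  // Random initial solution.
  std::vector<int> parent(dimension);
  for (auto &b : parent) b = bit(gen);
  int best = one_max(parent);

  // (1+1) EA: flip each bit with probability 1/n, keep the child if it is not worse.
  for (int eval = 1; eval < budget && best < dimension; ++eval) {
    std::vector<int> child = parent;
    for (auto &b : child)
      if (uni(gen) < 1.0 / dimension) b = 1 - b;
    const int value = one_max(child);
    if (value >= best) {
      best = value;
      parent = std::move(child);
    }
  }

  std::cout << "best OneMax value found: " << best << std::endl;
  return 0;
}
```

In an actual experiment, the hand-written `one_max` call would be replaced by an evaluation through an IOHexperimenter problem object, so that every function evaluation is tracked and logged automatically.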
To use the IOHexperimenter within `R`, please visit the R branch of this repository.
Benchmarking problems in IOHexperimenter are easy to create yourself. We provide support for any input type and any number of real-valued objectives. For a more detailed guideline on how to define a benchmarking problem within IOHexperimenter, please visit this page.
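To give a feel for what such a problem definition amounts to, the sketch below writes a toy problem as a plain class with a dimension and an evaluate function that maps a candidate solution to a single real-valued objective. The class name and its free-standing nature are made up for illustration only; the actual base class to inherit from and the registration mechanism are described on the page linked above.

```cpp
#include <vector>

// Hypothetical stand-in for a user-defined benchmarking problem.
// In IOHexperimenter, a problem would derive from the provided problem
// base class instead of being a free-standing class like this one.
class LeadingOnesToy {
public:
  explicit LeadingOnesToy(int dimension) : dimension_(dimension) {}

  // Single real-valued objective: length of the leading block of ones.
  double evaluate(const std::vector<int> &x) const {
    int count = 0;
    for (int i = 0; i < dimension_ && x[static_cast<size_t>(i)] == 1; ++i)
      ++count;
    return static_cast<double>(count);
  }

  int dimension() const { return dimension_; }

private:
  int dimension_;
};
```

A Pseudo-Boolean example is used here for brevity; since IOHexperimenter supports arbitrary input types, the same shape carries over to, for example, real-valued inputs or multiple objectives.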
Suites are collections of benchmarking problems. By including problems in a suite, it is easier for users to maintain their experiments. If you create a set of similar problems, it is recommended to collect them in a suite, which can be done effortlessly within IOHexperimenter. For detailed steps on creating and using suites, please visit this page.
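Conceptually, a suite is just an ordered collection of problem instances that an experiment iterates over. The sketch below mimics that idea with a plain `std::vector` of named objective functions; the names and structure are purely illustrative and are not the IOHexperimenter suite API, which additionally handles problem, instance, and dimension selection as described on the page linked above.

```cpp
#include <algorithm>
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// A "suite" here is just a list of named objective functions over bit strings.
// Illustrative only; IOHexperimenter provides dedicated suite classes.
using Problem =
    std::pair<std::string, std::function<double(const std::vector<int> &)>>;

int main() {
  std::vector<Problem> suite = {
      {"OneMax",
       [](const std::vector<int> &x) {
         return static_cast<double>(std::count(x.begin(), x.end(), 1));
       }},
      {"LeadingOnes", [](const std::vector<int> &x) {
         double count = 0;
         for (int b : x) {
           if (b != 1) break;
           ++count;
         }
         return count;
       }}};

  // An experiment would run the optimizer on every problem in the suite;
  // here we just evaluate one fixed solution on each of them.
  const std::vector<int> x = {1, 1, 0, 1, 0};
  for (const auto &p : suite)
    std::cout << p.first << "(x) = " << p.second(x) << std::endl;
  return 0;
}
```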
If you have any questions, comments or suggestions, please don't hesitate to contact us at IOHprofiler@liacs.leidenuniv.nl!
- Furong Ye, Leiden Institute of Advanced Computer Science
- Diederick Vermetten, Leiden Institute of Advanced Computer Science
- Hao Wang, Leiden Institute of Advanced Computer Science
- Carola Doerr, CNRS and Sorbonne University
- Thomas Bäck, Leiden Institute of Advanced Computer Science
When using IOHprofiler and parts thereof, please kindly cite this work as
Carola Doerr, Hao Wang, Furong Ye, Sander van Rijn, Thomas Bäck: IOHprofiler: A Benchmarking and Profiling Tool for Iterative Optimization Heuristics, arXiv e-prints:1810.05281, 2018.
@ARTICLE{IOHprofiler,
author = {Carola Doerr and Hao Wang and Furong Ye and Sander van Rijn and Thomas B{\"a}ck},
title = {{IOHprofiler: A Benchmarking and Profiling Tool for Iterative Optimization Heuristics}},
journal = {arXiv e-prints:1810.05281},
archivePrefix = "arXiv",
eprint = {1810.05281},
year = 2018,
month = oct,
keywords = {Computer Science - Neural and Evolutionary Computing},
url = {https://arxiv.org/abs/1810.05281}
}