
BBOB 2012 - Workshop on Black Box Optimization Benchmarking

Date: 2012-07-07

Deadline: 2012-03-28

Venue: Philadelphia, USA

Keywords:

Website: http://www.sigevo.org/gecco-2012/workshops.html

Topics/Call for Papers

Benchmarking of optimization algorithms is crucial for assessing the performance of optimizers quantitatively and for understanding the weaknesses and strengths of each algorithm; it is also an indispensable step in testing new algorithm designs. The black-box optimization benchmarking workshop aims at benchmarking both stochastic and deterministic continuous optimization algorithms. This new edition follows the BBOB 2009 and BBOB 2010 GECCO workshops. Those previous editions resulted in (1) collecting data for a wide range of optimizers (32 in 2009 and 25 in 2010) that are now freely available to the entire community, (2) providing meaningful tools for visualizing comparative results, and (3) establishing a standard for the benchmarking of algorithms. As a result, the BBOB test suite as well as the results published at the workshops have been used in various publications independent of the workshop, the proposed benchmarking procedure is becoming a standard, and the collected data have started to be used by statisticians to identify and classify properties of algorithms. The impact of the previous editions is visible not only in the EC community but also in the mathematical optimization community, where BBOB results are now starting to be cited.
With this new edition, we would like to build on the success of BBOB 2009 and BBOB 2010 and to increase and diversify the data collection we already have. We believe that the results from previous editions support the task of designing new and better algorithms, and we therefore expect results from new algorithms to be submitted to the workshop. We will provide essentially the same test suite as in 2010. However, the source code of the test functions, previously available in Matlab, C, and Java, will additionally be available in Python. We will also provide new postprocessing tools for comparing more than two algorithms. This new edition will entirely ban different parameter settings for different test functions and will encourage analyses that study the impact of changing parameter settings. A typical experiment loops over dimensions, functions, and instances of the suite, as sketched below.
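The following minimal Python sketch illustrates the general shape of such a benchmarking loop. It is not the workshop's official code: the `sphere` function stands in for a real test function from the suite, random search stands in for the algorithm under test, and the dimensions, instance counts, budget, and target precision are illustrative values; the example scripts distributed with the BBOB software define the actual logging interface.

```python
# Sketch of a BBOB-style experiment loop (hypothetical stand-ins:
# `sphere` replaces a real test function, random search replaces
# the algorithm being benchmarked).
import numpy as np

def sphere(x):
    # Stand-in objective; the real suite provides 24 noiseless functions,
    # each shifted/rotated differently per instance.
    return float(np.sum(np.asarray(x) ** 2))

def random_search(fun, dim, budget, ftarget):
    """Evaluate uniform random points in [-5, 5]^dim, stopping early
    once the target function value is reached."""
    fbest = np.inf
    for evals in range(1, budget + 1):
        fbest = min(fbest, fun(10 * np.random.rand(dim) - 5))
        if fbest <= ftarget:
            break
    return fbest, evals

if __name__ == "__main__":
    ftarget = 1e-8                      # illustrative precision target
    for dim in (2, 3, 5, 10, 20):       # typical benchmark dimensions
        for instance in range(1, 6):    # several instances per function
            budget = 1000 * dim         # budget scaling with dimension
            fbest, evals = random_search(sphere, dim, budget, ftarget)
            print(f"dim={dim:2d} inst={instance} fbest={fbest:.2e} evals={evals}")
```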
Participants are invited to submit a paper with the results of an algorithm of their choice, together with comparisons against algorithms from our database. They are also encouraged to use the existing database for statistical analyses or for designing a portfolio of algorithms. When data for all algorithms of a portfolio are available in the database, the performance of the portfolio can be obtained by the postprocessing without conducting further experiments. An overall analysis and comparison will be carried out by the organizers and presented during the workshop, alongside the individual presentations of the participants. Our presentation will in particular address the question of whether significant over-adaptation of algorithms to the benchmark function set has taken place in recent years, and will discuss how benchmarks should (co-)evolve.
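To give a flavor of how portfolio performance can be derived from already recorded data, consider a portfolio that runs its member algorithms in parallel with equal budgets: per run, the fastest member determines when the target is hit, while every member consumes that budget. The toy Python snippet below illustrates this accounting; it is not the workshop's postprocessing code, and the data layout and numbers are invented for illustration.

```python
# Toy illustration: estimating a parallel portfolio's cost to reach a target
# from runtimes already stored for the individual algorithms.
# runs[alg][i] = evaluations algorithm `alg` needed on run i (None = failure).
runs = {
    "alg_A": [1200, 3400, None, 900],
    "alg_B": [2500, 1800, 4000, None],
}

def portfolio_cost(per_run):
    """Parallel-portfolio cost on one run: the fastest member decides,
    but all members spend that many evaluations."""
    successful = [r for r in per_run if r is not None]
    if not successful:
        return None                      # no member reached the target
    return len(per_run) * min(successful)

n_runs = len(next(iter(runs.values())))
for i in range(n_runs):
    cost = portfolio_cost([runs[alg][i] for alg in runs])
    print(f"run {i}: portfolio evaluations = {cost}")
```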
