Dear Qian Yanfei,
That depends on what you want to show. If you want to show that the parameter values found work well only on those benchmark functions, then yes, use the same functions. But in that case, why not tune for each specific function? You are fixing the functions, and you do not expect the parameter values to work well on functions different from those.
If you want to find good parameter values for continuous optimization functions in general, then you need to show that the parameter values found during tuning also work well on functions that you did not use in the tuning.
If your assumption is that what works well on the benchmark should also work well on other continuous optimization functions, then testing on other functions should show that the parameter values tuned on the benchmark set also give good results on a different test set. Otherwise, your assumption is wrong.
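To make the idea concrete, here is a minimal sketch (in Python, just for illustration; the function names and the tuner itself are hypothetical placeholders, not real irace code) of holding out part of a benchmark suite so that the tuned configuration is validated on functions never seen during tuning:

```python
import random

# Hypothetical benchmark suite: names of continuous test functions.
# In practice these could be, e.g., BBOB or CEC benchmark functions.
functions = ["sphere", "rosenbrock", "rastrigin", "ackley",
             "griewank", "schwefel", "levy", "zakharov"]

random.seed(42)
random.shuffle(functions)

# Hold out half of the functions: tune only on the first half,
# then evaluate the tuned configuration on the held-out half.
split = len(functions) // 2
tuning_set = functions[:split]   # instances given to the tuner (e.g. irace)
test_set = functions[split:]     # never seen during tuning

# Disjoint by construction: good test performance then suggests
# the tuned parameter values generalize beyond the tuning set.
assert set(tuning_set).isdisjoint(test_set)
print("tuning on:", tuning_set)
print("testing on:", test_set)
```

If performance on the test set is much worse than on the tuning set, that is evidence the parameter values were over-tuned to the benchmark functions rather than being good in general.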
Irace cannot tell you what you want to do (yet ;-) ), it just (hopefully) enables you to do what you want to do :-)
I hope this helps!
Manuel.
--
Dr Manuel López-Ibáñez | "Beatriz Galindo" Senior Distinguished Researcher | University of Málaga, Spain |
http://lopez-ibanez.eu
------------------------------------------------------------------------------
Evolutionary Computation Journal: Special Issue on Reproducibility:
http://lopez-ibanez.eu/ecj-si-rep
------------------------------------------------------------------------------
Workshop on Space & AI in association with ECML/PKDD 2021 (deadline: July 30th)
http://spaceandai.ijs.si
------------------------------------------------------------------------------
11th Workshop on Evolutionary Computation for the Automated Design of Algorithms (ECADA):
https://bonsai.auburn.edu/ecada/GECCO2021
------------------------------------------------------------------------------