Hi!
I am currently writing a master's thesis based on OptaPlanner. I have several planners/configs to compare and many problem instances, so the experiments take a very long time to run. It occurred to me that, if I want to use different termination criteria, I can look at the CSV files that track the best score and "simulate" a run (assuming the new criteria result in a lower max time). In other words, I can ask what would have happened if... This has saved me a huge amount of time!

But there is one annoying wrinkle: I realized that the CSV files do not record the full score; the number of uninitialized variables is omitted. Anyway, I have two questions.
1. What's the reasoning behind omitting the init level? (Just curious.)
2. I am considering writing a more robust tool for simulating benchmark runs. (Right now it's a pretty simple and limited Python program; a rough sketch of the idea is below.) Does this sound like something that would be of interest to anyone else? I'm thinking it would produce a real benchmark report just like the normal benchmarker, and possibly selectively re-run the experiments that cannot be simulated because their original running time was too short. (I suspect I'm not gonna bother if nobody else wants to use it :P) Of course, if anyone else wants to take the idea and run with it, be my guest. ;)
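To make the idea a bit more concrete, here is a minimal sketch of the kind of simulation I mean. The file name and the column layout (time spent in milliseconds, then the score) are just assumptions on my part; adjust them to whatever your CSVs actually contain.

import csv

def simulate_run(csv_path, new_max_seconds):
    """Replay a best-score CSV and return the best score that would have
    been reported if the solver had been terminated after new_max_seconds
    instead of the original time limit.

    Assumes each row is (time spent in ms, score) in chronological order;
    adjust the column indices to match the actual CSV layout.
    """
    cutoff_ms = new_max_seconds * 1000
    best = None
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 2:
                continue  # skip empty or malformed lines
            try:
                time_ms = int(row[0])
            except ValueError:
                continue  # skip the header line
            if time_ms > cutoff_ms:
                break  # everything after the new termination point is ignored
            best = row[1]  # rows are chronological, so the last one kept wins
    return best

if __name__ == "__main__":
    # Hypothetical file name and time limit, just to show the intended usage.
    print(simulate_run("exampleConfig_problem0_BEST_SCORE.csv", new_max_seconds=60))

The whole trick rests on the rows being chronological, so the last recorded score before the new cut-off is exactly the best score the solver would have had if it had been terminated there.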
/Christoffer