- modellers to choose the best solver for their analysis,
- funders to track the performance improvements of solvers over time,
- solver developers to address the problems that matter to you!
Hi Max, all
Just two thoughts. Perhaps the survey could be modified appropriately?
Constructing the constraint matrix is a major overhead too. Do you need to cover this frequent bottleneck as well?
In the interests of disclosure, should you not mention that some of the benchmark platform developers are aligned with HiGHS? The Zenodo link you provided is a funding application. I think it is vital that such a platform be independent. Are there potential conflicts of interest? And, if so, how do you propose to address these?
With best wishes, Robbie
Thanks Max, I acknowledge my COI question was rather direct; I appreciate the answer. Kia ora,
Robbie
Hi Robbie,
Good to hear from you, and great questions.
1) When solving a model, we spend time in the solver plus the solver interface, though the time spent in the solver is usually far greater than the time spent in the interface (at least for fast interfaces like JuMP, Linopy, CVXPY, and PyOptInterface). For capacity expansion models, for example, we nowadays spend about 99% of the time in the solver (often C++) and 1% in the solver interface (often Python, Julia, or GAMS). Tracking solver performance is therefore usually the most important measurement, and it is always informative for energy modellers.

Tracking solver performance (e.g. HiGHS, GLPK) across all energy models is also relatively easy: we only need to collect a set of exported .lp/.mps files from the community (see the sketch below). In contrast, tracking solver interface performance (e.g. JuMP, Linopy) for each model would require building and solving the problems through each model itself, which multiplies the effort needed to keep everything reproducible. We might still track interface performance in a few cases. This can be especially useful for models and applications that use "callback" functions in the solver interface, which interrupt the solving process to reformulate the optimization problem and speed up runs. The GenX team (capacity expansion modelling) and the Sienna team (production cost modelling) are working with methods that require this.
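To make "relatively easy" concrete, here is a minimal sketch of how such a collection of files could be timed, assuming the open-source highspy bindings for HiGHS are installed; the file names are placeholders, not files from the actual collection:

# Minimal sketch: time a solver directly on exported model files,
# bypassing the solver-interface (Linopy/JuMP) layer entirely.
# Assumes `pip install highspy`; the .mps/.lp paths are hypothetical.
import highspy

for path in ["model_a.mps", "model_b.lp"]:
    h = highspy.Highs()
    h.readModel(path)  # parse the exported .lp/.mps file
    h.run()            # solve; this is pure solver time
    print(path, h.getModelStatus(), f"{h.getRunTime():.2f}s")

The same files could be fed to any solver that reads .lp/.mps formats, which is what makes a shared file collection attractive for benchmarking.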
2) The article is just there to introduce people to why optimization solver performance matters. The team working on the benchmark platform is fully hosted at the non-profit Open Energy Transition, there will be no conflict of interest, and the benchmark website will be open source. Sorry, I could have been clearer.
I hope that helps.
Best wishes,
Max
--
Maximilian Parzen
CEO of Open Energy Transition - A Non-Profit Tackling Energy Planning Challenges Worldwide
PhD in Energy System Modelling | Energy Storage | University of Edinburgh