Dear All, for collegial info: a new kid on the block, SpineOpt in Julia, is now in the literature! Nice work from KU Leuven, KTH Royal Institute of Technology, Energy Reform Ltd., University College Dublin & VTT. Best, Mark
Hi all
Just seeking clarification on names and projects.
I take it that the Spine model became the SpineOpt model — as follows:
Kouveliotis-Lysikatos, Iasonas, Manuel Marin, Jon Olason, Mikael Amelin, and Lennart Söder (2020). "A network aggregation tool for the energy system modelling framework Spine". In 2020 International Conference on Smart Energy Systems and Technologies (SEST), pages 1–6. IEEE.
Ihlemann, Maren, Iasonas Kouveliotis-Lysikatos, Jiangyi Huang, Joseph Dillon, Ciara O'Dwyer, Topi Rasku, Manuel Marin, Kris Poncelet, and Juha Kiviluoma (1 September 2022). "SpineOpt: a flexible open-source energy system modelling framework". Energy Strategy Reviews. 43: 100902. ISSN 2211-467X. doi:10.1016/j.esr.2022.100902.
And that the following model also named Spine is completely unrelated:
There are other namespace collisions or near-collisions in our domain too; for instance, GENESYS and GENeSYS-MOD are entirely different projects.
Being from a relatively non-aligned standpoint, I do hear people comment on model proliferation within our domain. Whether the consolidation that will undoubtedly occur should be left to develop naturally, or could and should be more actively managed, is of course a tricky question. I suppose some overlapping projects could agree to pool resources and merge onto one or the other codebase?
with best wishes, Robbie
--
Robbie Morrison
Address: Schillerstrasse 85, 10627 Berlin, Germany
Phone: +49.30.612-87617
Dear Robbie, all,
A follow-up on that last comment you made here about model ‘proliferation’ and ‘consolidation’. As a non-energy-modeller, but as someone with a lot of experience in climate science, where there are some analogies, I think this is a really interesting discussion for this community to have.
A few years ago (~2015/16), I tried to gather a small group to raise funding to set up an intercomparison project somewhat analogous to the CMIP programme in climate modelling. Sadly that proposal never quite came together, for various reasons (including ill health)… but partly, I suspect, because it was simply too early for our community to take on such an effort (there were fewer well-developed open-source models then).
To explain: for those unfamiliar with CMIP, it is a major international project in which all the ‘major’ climate models agree to perform a common set of experiments so that the models (and their future projections) can be compared. It is very closely connected with the IPCC report cycles, and most IPCC headline results feature statements indicating the extent to which model projections agree or disagree. Note that the comparison is /not/ a beauty contest to identify which model is ‘best’ – there is simply no single climate model that is ‘best’ for all purposes. The intercomparison is, however, a way to develop confidence about which climate-change signals are more robust, which are less certain, and how models can be improved.
The challenges for energy models are somewhat different, but I think a similar exercise could still be worthwhile. It would need careful thought, but it could be a powerful way to move forward as a community. In particular, how does one design a good set of ‘experiments’ (i.e., specify a common set of boundary-condition inputs to the models, such as permitted grid topologies, technology/price assumptions, included/excluded processes, etc.)? Do we consider the solver/optimiser to be part of the model (if we run the same model with two different solvers, do we get the same answer)? Which outputs should we compare? How do we compare across models that include or exclude particular technologies/features, or that use different timestepping? Where do we put the dataset, and how do we make it available for everyone to access?
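To make those design questions a little more concrete, below is a rough sketch in Python with pandas. Everything named here (ExperimentSpec, harmonise_outputs, the two toy models) is a hypothetical illustration rather than any existing tool's API. The point is simply that the boundary conditions, including the solver, become explicit versionable data, and that outputs from models with different native timesteps get resampled onto a common resolution before being compared.

```python
# Hypothetical sketch of a shared intercomparison 'experiment' definition.
# None of these names refer to an existing tool; this only illustrates the
# idea that boundary conditions (and the solver) are explicit data.
from dataclasses import dataclass, field

import pandas as pd


@dataclass
class ExperimentSpec:
    """Common boundary conditions every participating model must use verbatim."""
    name: str
    weather_year: int
    co2_price_eur_per_t: float
    tech_costs: dict = field(default_factory=dict)  # capex, EUR/kW by technology
    excluded_techs: tuple = ()                      # e.g. forbid nuclear in one run
    solver: str = "highs"                           # treat the solver as part of the spec


def harmonise_outputs(runs: dict[str, pd.Series], freq: str = "D") -> pd.DataFrame:
    """Resample each model's output time series to a common resolution so that
    models with different native timesteps (hourly vs. 3-hourly, say) can be
    compared on equal terms; returns one column per model."""
    return pd.DataFrame(
        {model: series.resample(freq).mean() for model, series in runs.items()}
    )


if __name__ == "__main__":
    spec = ExperimentSpec(
        name="baseline-2030",
        weather_year=2010,
        co2_price_eur_per_t=100.0,
        tech_costs={"onshore_wind": 1100.0, "solar_pv": 550.0},
    )

    # Fake outputs from two hypothetical models with different native timesteps.
    idx_hourly = pd.date_range("2030-01-01", periods=48, freq="h")
    idx_3hourly = pd.date_range("2030-01-01", periods=16, freq="3h")
    runs = {
        "model_a": pd.Series(range(48), index=idx_hourly, dtype=float),
        "model_b": pd.Series(range(16), index=idx_3hourly, dtype=float),
    }

    daily = harmonise_outputs(runs, freq="D")
    print(spec.name)
    print(daily)  # per-model daily means; the spread across columns is one
                  # crude measure of inter-model disagreement
```

One design choice worth noting: because the solver string lives inside the specification, two runs that differ only in that field isolate solver-induced spread from model-induced spread, which speaks directly to the solver question above.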
Such an intercomparison – if it could be appropriately designed, performed, and analysed – might then inform future model development, and tell us whether model proliferation is itself ‘useful’ (because it samples some meaningful form of uncertainty) or ‘problematic’ (simply adding to confusion and duplicating effort). And – as a nod to the ‘open science’ ethos of openmod – it would be great to see a dataset like this emerge as a community resource for all to use/analyse/probe.
Would be interested to hear if anyone has thoughts on this… including ‘it is a daft idea because X’!
Best wishes,
David
---
David Brayshaw
Professor of Climate Science and Energy Meteorology
Department of Meteorology
University of Reading, UK
RG6 6BB
Group webpage: https://research.reading.ac.uk/met-energy/
Personal webpage: https://research.reading.ac.uk/meteorology/people/david-brayshaw/
Next Generation Challenges in Energy Climate Modelling Workshop: https://research.reading.ac.uk/met-energy/next-generation-challenges-workshop/next-generation-energy-climate-modelling-2022/
Next Generation Challenges in Energy Climate Modelling Webinars: https://research.reading.ac.uk/met-energy/next-generation-challenges-workshop/webinar/
My working hours may not coincide with yours, please only respond to this email in your normal working hours.