
Jul 16, 2021, 4:44:52 AM

to openmod list, Kempke, Daniel Rehfeldt, Charlie Vanaret

Dear open energy modellers,

Many of our models consist of large linear programs (LPs) that need commercial solvers to be solved in reasonable time. This is a barrier for many potential users outside the academic world (companies, NGOs, etc.) who cannot afford the licences.

Large models also need hardware with very large RAM (e.g. 256 GB or more; I've heard of people with 8 TB machines), which can also be prohibitive for some users.

This paper on the PIPS-IPM++ solver (which we discussed a few years ago

as part of the BEAM-ME project on the list) could be very interesting to

many of you:

"A massively parallel interior-point solver for linear energy system

models with block structure"

by Daniel Rehfeldt, Hannes Hobbie, David Schönheit, Ambros Gleixner,

Thorsten Koch, Dominik Möst

https://doi.org/10.1016/j.ejor.2021.06.063

working paper:

https://nbn-resolving.org/urn:nbn:de:0297-zib-74321

They exploit the block structure of energy system models, where you can make blocks of variables and constraints, e.g. by separating out the time periods, to allow parallel solving of large LPs. The innovation of PIPS-IPM++, if I understood it correctly, is to cleverly handle the linking constraints between the blocks (e.g. CO2 constraints, storage, capacity limits) while still enabling parallel solving of the blocks.
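To make the block structure concrete, here is a toy sketch of the block-angular ("arrowhead") constraint matrix such solvers exploit; the numbers and names are made up:

```python
def arrowhead(blocks, linking_rows):
    """Assemble the block-angular ("arrowhead") constraint matrix:
    independent diagonal blocks plus linking rows across all columns."""
    n_cols = sum(len(b[0]) for b in blocks)
    A = []
    offset = 0
    for b in blocks:
        for row in b:
            # pad each block row with zeros to its column position
            A.append([0.0] * offset + row + [0.0] * (n_cols - offset - len(row)))
        offset += len(b[0])
    A.extend(linking_rows)  # e.g. a yearly CO2 budget coupling all periods
    return A

# two time blocks (2 constraints x 3 variables each) plus one linking row
b = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
A = arrowhead([b, b], [[1.0] * 6])
print(len(A), len(A[0]))  # 5 6
```

Each diagonal block could then be handed to a different process, with only the bottom linking rows needing coordination.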

This is interesting for several reasons:

i) Just from a maths/theory point of view.

ii) Because it can solve even faster than commercial solvers for large

problems (they consider hourly modelling for a year, i.e. 8760 time

periods, for a network with 4000 nodes, 6000 lines, storage+generation).

iii) Almost all of the code, with the exception of the sparse linear equation system solver PARDISO, is open source. There are open-source substitutes for PARDISO, but you take a performance hit.

For the solving performance, see e.g. this figure for a small problem:

https://twitter.com/davidschoenheit/status/1415267722993315840

and these results for larger problems (where some commercial solvers

failed), where there were speedups of 50 times:

https://twitter.com/davidschoenheit/status/1415267728877838337

There is more detail in this Best Practice Guide from the BEAM-ME project:

https://gitlab.com/beam-me/bpg/-/raw/master/EnergySystemModel_SpeedUp_BestPracticGuide.pdf

The code is online here:

https://github.com/dRehfeldt/PIPS/blob/master/README_PIPSipmpp.md

but a further update will come later in the year.

The installation seems tricky.

Does anyone want to help build an interface for PyPSA or any other model?

Apart from the installation, it seems the main challenge is to mark the

variable and constraint blocks for PIPS (currently done in GAMS).
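As a rough illustration of what such a marking could look like outside GAMS (all names here are hypothetical, not a real PyPSA or PIPS API):

```python
# Hypothetical sketch of the annotation step: every variable and
# constraint gets a block number, 0 for the linking ("root") part
# and 1..N for the time blocks.
def annotate(n_periods, periods_per_block):
    blocks = {}
    for t in range(n_periods):
        block = t // periods_per_block + 1   # purely time-local rows/columns
        blocks[f"dispatch_{t}"] = block
        blocks[f"balance_{t}"] = block
    blocks["co2_budget"] = 0                 # couples all periods -> root
    return blocks

ann = annotate(n_periods=8760, periods_per_block=730)  # 12 monthly-ish blocks
print(ann["dispatch_0"], ann["dispatch_8759"], ann["co2_budget"])  # 1 12 0
```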

Is there any reason to be sceptical here? Are REMix folks planning to

implement this in their day-to-day work?

Best wishes,

Tom


--

Tom Brown (he/him)

Professor of Digital Transformation in Energy Systems

Institute of Energy Technology

Technische Universität Berlin

Group website: https://www.ensys.tu-berlin.de/

Personal website: https://nworbmot.org/

Visitor Address:

Einsteinufer 25 (TA 8)

10587 Berlin


Jul 16, 2021, 6:08:37 AM

to t.b...@tu-berlin.de, openmod-i...@googlegroups.com, kem...@zib.de, rehf...@zib.de, van...@math.tu-berlin.de, Manuel...@dlr.de

Hey all,

First of all, thanks Tom for the advertisement😉

Yes, the solver installation is tricky and the annotation is sometimes exhausting from a modeler's perspective, especially if you want to have it adjustable for different instances/versions of your model. I cannot estimate the real effort for doing this with PyPSA. However, it is definitely possible to develop converter scripts that do the same annotation for Python-based LPs as for GAMS-based ones.

A work-around solution we could offer, or which we are currently developing, is a converter script that takes the .nc files of PyPSA and converts the parameters into REMix inputs (btw: a new version of REMix, which is ideally supposed to go open source), which could then be fed into PIPS-IPM++. We did that for PyPSA-Eur and, after some debugging, started yesterday with solving the PyPSA-Eur data sets with PIPS. Let's see if we can solve the 3475 nodes with 8760 time steps...

Best

Kiên


Jul 16, 2021, 7:08:34 AM

to Karl-K...@dlr.de, t.b...@tu-berlin.de, openmod-i...@googlegroups.com, kem...@zib.de, rehf...@zib.de, van...@math.tu-berlin.de

Hi Tom, Hi all,

Exhausted REMix modeller here! First up - currently we cannot employ PIPS-IPM++ in our day-to-day work. This can mainly be attributed to the effort required in adapting the model formulation towards a better convergence behaviour of the solver.

The annotation (marking variables and equations for PIPS) itself is quite straightforward; a time-based decomposition seems to work best here. If you're lucky, that's it: give the annotated blocks to PIPS++ and profit from the speed-up. If you're unlucky and your model formulation leads to problematic convergence behaviour, it's back to the drawing board; in my case that led to a complete reimplementation of REMix (which, as Kiên announced, will hopefully soon be open source).

The underlying problem is that not all model instances are created equal: if you only want a model considering economic dispatch of the power system, you have good chances of benefiting from a great speed-up. If you want to consider expansion of power lines and storage, we still found a significant speedup of 15x-18x. Additional consideration of sector integration or transformation pathways makes it even more challenging to end up with good convergence behaviour. Commercial solvers still seem to be the best choice for those problems, but I am optimistic PIPS-IPM++ will get there.

The main drawbacks currently are the installation and general application of PIPS++, the requirement of access to an HPC, and needing people familiar with compiling software from source (I had to learn that part the hard way). Larger universities especially will have a certain advantage here, so I'm not sure how accessible PIPS++ will be for groups without funding for e.g. commercial solvers. You also need an adequate interface from your model to PIPS++, of course. There were already some efforts towards an interface with pyomo on the ZIB side, however I cannot estimate how well that translates to your custom nomopyomo implementation in PyPSA.

In my opinion there are currently two incentives to still employ parallel solvers despite the above-mentioned hurdles. As you correctly identified, switching from a shared-memory architecture to a distributed-memory architecture avoids the huge memory requirements on shared-memory nodes and therefore makes it possible to solve previously unsolvable problems. The other incentive is frequent recalculation or time-critical applications such as calculating redispatch measures; here the upfront investment in getting a single model instance to perform reasonably is most likely time well spent.

Also keep in mind that the linked git repository is not the version of PIPS-IPM++ to be published open source later this year. There is still a lot of legacy code from Argonne included for compatibility reasons, making the installation even more complicated.

Best,

Manuel


Jul 16, 2021, 9:05:13 AM

to Manuel...@dlr.de, Karl-K...@dlr.de, openmod-i...@googlegroups.com, kem...@zib.de, rehf...@zib.de, van...@math.tu-berlin.de

Dear Kiên and Manuel,

Thanks a lot for your replies - a lot of very useful information!

First: great news that REMix is on the way to being open source! Please

tell us if anyone can help with arguments or arm-twisting :-).

Nice to hear you are trying PyPSA-Eur via a REMix importer. Please report how it goes and whether we can help with that.

I think the annotation with nomopyomo inside PyPSA should be straightforward, since we keep close track of all the mappings (obviously), so we can output the block mappings in parallel.

Once one had some experience of how to do the blocking, one could write guidelines for different types of problems (expansion/network/sector-coupled etc.)? Then people wouldn't have to reinvent the wheel.

Just so I understand the shared versus distributed memory thing: shared

memory is memory on the same machine, but what exactly is the

distributed memory requirement for PIPS-IPM++? Is it sharing information

on files written to disk, or do you need some fancier shared memory

interface between the machines? (Showing my ignorance here...)

Best wishes,

Tom


Jul 18, 2021, 5:36:34 PM

to Tom Brown, Fabian Hofmann, Manuel...@dlr.de, Karl-K...@dlr.de, openmod-i...@googlegroups.com, kem...@zib.de, rehf...@zib.de, van...@math.tu-berlin.de

Hello everyone,

A couple of thoughts on this, since I by chance spent some time dreaming with Fabian Hofmann about this after digging into the Drivers code at [1] a couple of weeks ago.

The main interface code in that repo is at [2], which reads the GDX files built from having GAMS interpret the annotations and split them into separate problem files. The main thing to do is to build a StochInputTree with a root node (the part of the model annotated as 0 in the Best Practice Guide) and then sub-nodes for each block of the model. For each node, it looks like you mainly want the constraint matrix and its bounds, as well as the variable bounds (up to line 352). The rest of the file is then mostly independent of GAMS (OK, writing the solution out is of course also GAMS-specific in that case).
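A toy illustration of that tree shape (plain Python, not the real PIPS C++ StochInputTree/StochInputNode API):

```python
# Toy illustration: a root node for the linking part plus one child
# per block, each carrying its constraint matrix and variable bounds.
class Node:
    def __init__(self, A, lb, ub):
        self.A, self.lb, self.ub = A, lb, ub
        self.children = []

root = Node(A=[[1.0, 1.0]], lb=[0.0], ub=[100.0])   # block 0: linking rows
for _ in range(3):                                   # three model blocks
    root.children.append(Node(A=[[1.0, 0.0], [0.0, 1.0]],
                              lb=[0.0, 0.0], ub=[10.0, 10.0]))
print(len(root.children))  # 3
```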

Since the StochInputNode object is standard PIPS (from before the IPM++ extension), one should be able to adapt their pyomo solvers or learn from the StructJuMP.jl interface [3].

For PyPSA, this would mean that the LP writing in linopt.py would have to be replaced with writing out the sparse matrices and their bounds instead of the LP file components. It's not immediately clear what the best way to do this is, but it's definitely possible. Fabian had the suggestion that it might fit more easily into his latest nomopyomo playground [4].
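A minimal sketch of what such a writer might do, assuming the blocks are already annotated (pure illustration, not linopt.py's actual API):

```python
def split_blocks(A, row_block, col_block, n_blocks):
    """Given the full constraint matrix and a block label per row and
    column (0 = linking), extract each block's local sub-matrix -
    roughly what a PIPS-oriented writer would emit instead of one
    monolithic LP file."""
    parts = {}
    for k in range(1, n_blocks + 1):
        parts[k] = [[a for a, cb in zip(row, col_block) if cb == k]
                    for row, rb in zip(A, row_block) if rb == k]
    # linking rows keep all columns, since they couple several blocks
    parts[0] = [row for row, rb in zip(A, row_block) if rb == 0]
    return parts

A = [[1.0, 2.0, 0.0, 0.0],
     [0.0, 0.0, 3.0, 4.0],
     [1.0, 1.0, 1.0, 1.0]]   # last row: linking constraint (e.g. CO2)
parts = split_blocks(A, row_block=[1, 2, 0], col_block=[1, 1, 2, 2], n_blocks=2)
print(parts[1], parts[2], parts[0])
```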

To your question, Tom: the machines are actively exchanging information using MPI.

Best,

Jonas

[1] https://github.com/dRehfeldt/PIPS/blob/master/PIPS-IPM/Drivers

[2] https://github.com/dRehfeldt/PIPS/blob/master/PIPS-IPM/Drivers/gmspips/gmspips.cpp

[3] https://github.com/StructJuMP/StructJuMP.jl, https://github.com/Argonne-National-Laboratory/StructJuMPSolverInterface.jl

[4] https://github.com/PyPSA/linopy


Jul 18, 2021, 5:59:14 PM

to Tom Brown, Fabian Hofmann, Manuel...@dlr.de, Karl-K...@dlr.de, openmod-i...@googlegroups.com, kem...@zib.de, rehf...@zib.de, van...@math.tu-berlin.de

Small correction to the mail I just sent, inline.

On 18. Jul 2021, at 23:36, Jonas Hörsch <jonas....@posteo.de> wrote:

> Since the StochInputNode object is standard PIPS (from before the IPM++ extension), one should be able to adapt their pyomo solvers or learn from the StructJuMP.jl interface [3].

The StructJuMP interface contains glue to interface with PIPS-NLP, and as such, instead of building StochInputTree nodes, hands in callbacks for retrieving Jacobians and Hessians ( https://github.com/Argonne-National-Laboratory/StructJuMPSolverInterface.jl/blob/julia0.7/src/pips_parallel_cfunc.jl#L543-L560 ), which is not helpful for PIPS-IPM.

Sorry for the diversion.

Jul 19, 2021, 3:42:09 AM

to jonas....@posteo.de, t.b...@tu-berlin.de, hof...@fias.uni-frankfurt.de, Karl-K...@dlr.de, openmod-i...@googlegroups.com, kem...@zib.de, rehf...@zib.de, van...@math.tu-berlin.de

Hi all,

Yes, PIPS-IPM++ is completely independent of GAMS; the only challenge is providing appropriate files containing the block structure and a corresponding driver file to get the solver started. This driver has to be adapted to the modelling language you are working with (or use some generic format such as an LP file, which however must contain additional annotation). The basic idea behind the block structure is that you have several semi-independent parts of the problem and one part linking them together. In the simplest case, imagine a stochastic optimization problem where you want to optimize the dispatch of conventional power plants against an expected stochastic feed-in of renewables: you decide on one dispatch profile in the first stage and then calculate total expected cost based on the probabilistic scenarios. This was also the initial use case for StructJuMP together with the original solver PIPS-IPM [1,2].


The main drawback of the original solver is that it can only consider linking variables, since the blocks are completely independent aside from the first-stage decision. To solve more general types of models you also need to consider linking constraints (e.g. telling another block what the storage level of the lithium-ion batteries was in the previous block). This is possible in PIPS-IPM++, but not in the original PIPS-IPM (hence the name suffix ++).
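A tiny numerical example of such a storage linking constraint (made-up numbers):

```python
# Toy check of the storage linking constraint described above: the
# state of charge at the start of block k+1 must equal the state of
# charge at the end of block k plus net charging. The two variables
# live in different blocks, so this row must go to the linking part.
def linking_residual(soc_end_prev, soc_start_next, charge, discharge):
    return soc_start_next - (soc_end_prev + charge - discharge)

r = linking_residual(soc_end_prev=5.0, soc_start_next=6.5,
                     charge=2.0, discharge=0.5)
print(r)  # 0.0 means the two blocks agree
```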


For the people not so familiar with HPC and MPI: high-performance computing (HPC) clusters are typically optimized for large CPU performance (or nowadays GPU, or both). This means you have a lot of machines, but each machine has relatively little memory available (typically in the range of 100-200 GB), which cannot fit the optimization matrix of large-scale energy system models. Running one application on one machine accessing one block of memory is the shared-memory setting. PIPS-IPM++ allows using multiple machines at the same time: each machine only has access to its own memory, but also only needs to store a part of the optimization matrix. This is distributed memory, since one application uses the memory of multiple compute nodes. The application then has to run on multiple machines, but also benefits from more CPUs; this is achieved via the Message Passing Interface (MPI), which allows communication between nodes within a single application. The neat part is that we can build as many blocks as we like and distribute them to compute nodes. In theory this flexibility allows us to just increase the number of blocks if memory or time constraints are an issue; in the real world there are other effects which can prevent this, such as communication overhead or buggy libraries...
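To illustrate the distributed-blocks idea without an actual MPI setup, here is a stand-in sketch using Python worker threads instead of MPI ranks (illustration only; threads still share memory, unlike real MPI processes, and this is not how PIPS-IPM++ is implemented):

```python
from concurrent.futures import ThreadPoolExecutor

# Each MPI rank in PIPS-IPM++ stores and factorises only its own block;
# here worker threads stand in for ranks.
def solve_block(block):
    return sum(block)  # placeholder for the per-block linear algebra

blocks = [[1.0] * 4, [2.0] * 4, [3.0] * 4]        # three local sub-problems
with ThreadPoolExecutor(max_workers=3) as ex:
    partial = list(ex.map(solve_block, blocks))    # one worker per block
total = sum(partial)   # the small reduction/"communication" step
print(partial, total)
```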


Hope this helps explain the underlying problem a bit better.


Best,

Manuel

[1] https://ieeexplore.ieee.org/document/7069901

[2] https://ieeexplore.ieee.org/document/6809706


Jul 19, 2021, 3:50:21 AM

to Manuel...@dlr.de, jonas....@posteo.de, t.b...@tu-berlin.de, hof...@fias.uni-frankfurt.de, karl-k...@dlr.de, openmod-i...@googlegroups.com, kem...@zib.de, rehf...@zib.de, van...@math.tu-berlin.de

Hi all, very interesting,

when I first read the paper I didn't get the technical details but I found

it quite impressive.

I guess when you run large-scale energy system models, you have three potential constraints: run time, memory and numerical trouble.

I guess runtime, which should split into presolve, factorisation, solve and potentially crossover (which you don't use), should scale with the size of the energy system model roughly like: presolve and solve a little, factorisation and crossover a lot. So one would expect even larger speedups for larger problems via parallelisation?

However, how is it with potential numerical trouble, which barrier is relatively prone to in comparison to simplex? Do the standard recipes (choosing certain ranges of objective/variables where you don't run into trouble with the dual) help here as well? Can you make an estimate? Or are numerical issues no problem at all?

If any of the above is wrong, please correct me.

Best regards

Alex

> https://github.com/dRehfeldt/PIPS/blob/master/PIPS-IPM/Drivers<https://github.com/dRehfeldt/PIPS/blob/master/PIPS-IPM/Drivers/gmspips/gmspips.cpp>

>

when I first read the paper I didn't get the technical details but I found

it quite impressive.

I guess when you run large-scale energy system models, you have three

potentials constraints: run time, memory and numerical trouble.

I guess runtime which should split into presolve, factorisation, solve and

potentially crossover (which u dont use) should scale with the size of the

energy system model like:

presolve and solve a little, factorisation and crossover alot,

so one would expect even larger speedups for larger problems via

parallelisation?

However, how is it with potential numerical trouble, which barrier is

relatively prone to in comparison to simplex?

Do the standard recipes (choose certain ranges of objective/variables

where u don't run into trouble with the dual) help here as well? Can you

make an estimate? Or are numerical issues no problem at all?

If any of the above is wrong, please correct me.

Best regards

Alex

>

> [2]

> https://github.com/dRehfeldt/PIPS/blob/master/PIPS-IPM/Drivers/gmspips/gmspips.cpp

>

> [3] https://github.com/StructJuMP/StructJuMP.jl,

> https://github.com/Argonne-National-Laboratory/StructJuMPSolverInterface.jl

>

> [4] https://github.com/PyPSA/linopy

>

> [2]

> https://github.com/dRehfeldt/PIPS/blob/master/PIPS-IPM/Drivers/gmspips/gmspips.cpp

>

> [3] https://github.com/StructJuMP/StructJuMP.jl,

> https://github.com/Argonne-National-Laboratory/StructJuMPSolverInterface.jl

>

> [4] https://github.com/PyPSA/linopy

>

> --

> You received this message because you are subscribed to the Google Groups

> "openmod initiative" group.

> To unsubscribe from this group and stop receiving emails from it, send an

> email to openmod-initiat...@googlegroups.com.

> To view this discussion on the web, visit

> https://groups.google.com/d/msgid/openmod-initiative/04583950226447a3877f53b1668e8657%40dlr.de.

Jul 19, 2021, 4:13:45 AMJul 19

to ki...@fias.uni-frankfurt.de, jonas....@posteo.de, t.b...@tu-berlin.de, hof...@fias.uni-frankfurt.de, Karl-K...@dlr.de, openmod-i...@googlegroups.com, kem...@zib.de, rehf...@zib.de, van...@math.tu-berlin.de

Hi Alex,

In our experience the main bottleneck is only the solve phase, since this phase can take up to several hours or days. For PIPS-IPM the presolve also has to work in a block-structure-preserving way, but it is therefore also quite fast. Crossover can be skipped in most cases, since the barrier solution is accurate enough for most purposes (and there are almost no large ESM instances I am aware of for which crossover would finish in a reasonable time).

One bottleneck you missed is model generation, which is technically not part of the solver phase but has to be done nonetheless. Writing out LP files or badly optimized generation code can lead to generation times longer than the actual solution phase, especially if you use a parallel solver.

Yes, I would rank numerical problems as the current main limitation of PIPS-IPM, partly due to barrier itself but partly also due to the formulation of energy system optimization models. The standard recipes should apply here as well, since most of the algorithm works very similarly to the commercial ones. We are currently looking into different model formulations and their impact on the solve times for commercial solvers and PIPS-IPM++.

Maybe it would be a good idea to have a session at a future openmod workshop on the whole topic, or even a dedicated stand-alone workshop. It would also be great to see some other approaches such as heuristic methods (we did some comparisons on this as well [1]) or other decomposition techniques. I know some people working on Benders and augmented Lagrangian methods for energy system models.

Best,

Manuel

[1] https://ieeexplore.ieee.org/document/9248780 https://elib.dlr.de/128258/

> -----Ursprüngliche Nachricht-----

> Von: ki...@fias.uni-frankfurt.de <ki...@fias.uni-frankfurt.de>

> Gesendet: 19 July 2021 09:50

> An: Wetzel, Manuel <Manuel...@dlr.de>

> Cc: jonas....@posteo.de; t.b...@tu-berlin.de; hof...@fias.uni-

> frankfurt.de; Cao, Karl-Kien <Karl-K...@dlr.de>; openmod-

> initi...@googlegroups.com; kem...@zib.de; rehf...@zib.de;

> van...@math.tu-berlin.de

> Betreff: Re: AW: [openmod-initiative] A massively parallel interior-point


Jul 19, 2021, 4:22:38 AMJul 19

to openmod-i...@googlegroups.com

Hi people

I am seeking some confirmation too. I just typed this out but I think

my query is answered below in any case. Let's assume complete numerical

precision for the sake of argument! If I send my energy system problem,

duly linearized for non‑convex relations, to:

* a basic Simplex solver (like the one from Numerical Recipes, see below)

Or alternatively:

* identify some block structure and solve the system using PIPS-IPM++

and friends

Will I get identical results? Or is there some additional

compartmentalization occurring with PIPS-IPM++ via the block structure?

By the way, the first energy system framework I used embedded modified

Simplex code from Press et al (1992). Just a few tens of lines of C.

with best wishes, Robbie

REFERENCES

Press, William H, Brian P Flannery, Saul A Teukolsky, and William T

Vetterling (30 October 1992). "Numerical recipes in C: the art of

scientific computing (2nd ed)". Cambridge, United Kingdom: Cambridge

University Press. ISBN 978-052143108-8. All code is copyright of the

publisher.

--

Robbie Morrison

Address: Schillerstrasse 85, 10627 Berlin, Germany

Phone: +49.30.612-87617


Jul 19, 2021, 4:48:40 AMJul 19

to Nils-Christian Kempke, Manuel...@dlr.de, ki...@fias.uni-frankfurt.de, t.b...@tu-berlin.de, hof...@fias.uni-frankfurt.de, Karl-K...@dlr.de, openmod-i...@googlegroups.com, rehf...@zib.de, van...@math.tu-berlin.de

Hi,

I am enjoying this conversation very much, thanks to all of you insightful people on the list.

@Manuel: Thanks for pointing out the significant difference of introducing linking constraints.

@Nils: Great, I was confused why GAMS would write out one GDX per block, when they show up as explicit matrix objects in the PIPS-IPM++ interface. Lazy is the key. At least for PyPSA, I expect a file-based interface, rather than the complication of a callback-based one, would make a lot of sense.

Do you have documentation for the individual matrix building blocks in the initialisation of the StochInputTree? What do you think is the easiest binary format one could write out a block and its linking constraints in? Some variant of the rawInput structure? Maybe it is actually easiest to just adopt the METIS project formats as well (even though they are text-based; LP-file reading never was much of a bottleneck despite being text-based either). Is there documentation somewhere?

Thanks a lot,

Jonas

> On 19. Jul 2021, at 10:24, Nils-Christian Kempke <kem...@zib.de> wrote:

>

> Hi all,

>

> to add some more technical info on how PIPS reads problems (because that came up like 5 mails ago :D):

>

> Internally PIPS (in the extended version) will always use a CallbackTree to build the problem from - so indeed, supplying some means of callbacks to PIPS which can be used to query the individual problem parts would suffice for us to (maybe with some adaption) run it.

>

> The main thing here is probably that, since PIPS runs as a set of connected but independent processes, and if the problem is large, one would ideally only want to supply to a PIPS process the subset of callbacks that is actually needed to run that process (and especially only hold that part in main memory). Currently, we have two file-based interfaces (GAMS and one with the METIS project at Jülich which is based on txt files - not ideal but working). The text-based interfaces will use the same CallbackTree and structure to build the problem. Basically, PIPS will produce a bunch of file callbacks from the given files, which are then read lazily (depending on whether or not a certain process in PIPS actually needs to read a certain block). That way, each process will also only hold part of the model in main memory.

>

> GAMS does exactly what was mentioned before - write individual parts of the matrix (eq + ineq) and bounds for variables + part of the objective into gdx files, and additionally they supply a way for us on the PIPS side to read and query those files. From that, we build the CallbackTree, and from that we run PIPS.

>

> The interface with METIS does something similar, but for the beginning we just used txt files to write and read data. So on their side the parts of the problem are written into txt files, and on our side we implemented a reader, similar to the GAMS one, that is then used to build the CallbackTree. This is not ideal, since binary files or other formats might have been a better choice for reading and writing (since faster), but it works, even though a bit more slowly.

>

> In the two scenarios above, PIPS is started outside of GAMS or Python with a system call. There is certainly also the possibility to supply the callbacks directly via some interface in a given program - we would need to figure out how to run that efficiently in parallel then, but it should be just the same. In that case, PIPS would not need a reader for some possibly new file format, but the task of reading part of the model would actually lie on the modeller's/caller's side.

>

> One more thought would be to hand PIPS the whole LP or MPS file plus an annotation - that would definitely be the most user-friendly approach. The annotation would contain a mapping of variables to blocks as well as constraints to blocks, and that would be all a user would have to supply. The drawback here is obviously that every process would have to extract all the information it needs for solving the problem on its own (and thus would also have to read the whole file at least once). This access to PIPS is not implemented currently, and I am not sure about its performance either (as compared to parallel reading and callbacks), but yes - it might be a bit more involved but could work (even though definitely slower in reading).

>

> One last point: PIPS works on sparse matrices and it is certainly not a good idea to keep huge problems dense in memory (just because we ran into those issues with the METIS project). A possible file should ideally contain matrices in a sparse format.

>

>

> On the scaling: system factorization and all linear algebra techniques for the whole model in PIPS should scale reasonably well - presolve too is implemented in parallel and thus scales. The scaling will be limited by high amounts of linking variables and constraints.

>

> Best,

>

> Nils


Jul 19, 2021, 6:12:39 AMJul 19

to Jonas Hörsch, Manuel...@dlr.de, ki...@fias.uni-frankfurt.de, t.b...@tu-berlin.de, hof...@fias.uni-frankfurt.de, Karl-K...@dlr.de, openmod-i...@googlegroups.com, rehf...@zib.de, van...@math.tu-berlin.de

Hi,

RawInput is an old PIPS interface written at Argonne which can only be used (and is not currently maintained) for stochastic problems (2-stage stochastic problems, which generally don't have linking constraints) - it assumes that all diagonal blocks except the first-stage ones are actually the same and of the same size etc., and does not allow for linking constraints (I think there is a method in there called getLinkingConstraints - so maybe they cannot deal with linking variables then - but it's potato, potahto if you transpose the matrix).

Generally an interface like that would be quite nice - I am not quite sure what you mean by the rawInput structure, but yes: the file we read in should contain, by some means, all matrices and vectors needed for block k, where k = 1...N. Additionally there would be the 0-block file. I attached a picture from the METIS group which makes this a bit clearer - it is also the best documentation of the individual problem parts, I guess (apart from what is written in papers). We are quite free in how the stuff is written to files, I think - for the METIS project we just defined an order and then wrote out the respective parts of the problem in plain text, then read the parts back in the same way. Sadly, there is no documentation on that, since it is also still work in progress - and I think it should be improved a bit more, because the reading is still not as fast as with the GAMS gdx files. The reading will only become a real issue when one uses e.g. one process to read in a problem with 1000 blocks - the overhead in the PIPS read will be quite big and the read slower than reading one file (e.g. one LP file).

Generally one block file needs to somehow contain info on all the parts marked with its index in the attached picture (except x ;) ) - so for k in 1..N a block file would contain c (the objective), A (the linking-variable part), B (the diagonal part), Bl (the linking-constraints part) and b (the right-hand side). Inequalities are similar. Additionally one would have the bounds on x_k, so xl and xu.

The zero block would contain all the rest (everything marked with zero, plus the linking-constraint bounds and linking-variable bounds).

Currently, I have no idea what the fastest way to read in stuff actually

would be - but I guess we can figure something out.

Best,

Nils


Jul 20, 2021, 4:59:24 AMJul 20

to robbie....@posteo.de, openmod-i...@googlegroups.com

Hi Robbie,

Complete numerical precision is precisely where the two main methods simplex and barrier differ:

With the simplex method you move from corner to corner of the solution space along the most promising edges in the direction of the objective function. This means you will always end up at a basic solution (with exact values) when the algorithm finishes.

With the barrier method you move through the middle of the solution space in the direction of the objective function while maintaining a certain distance from all edges of the solution space. By gradually getting closer to the edges of your solution space, you reduce the step size per iteration. This, however, means you will only end up in close proximity to the optimal values and obtain a non-basic solution (e.g. variables which are 0 in the simplex algorithm are now only close to zero, such as 1e-9). You can now optionally use crossover to refine the solution - a simplex algorithm starting from the barrier solution. Then you have obtained the exact solution in both cases.

A good visualization for this can be found on the German Wikipedia pages [1,2]

The solutions from simplex and barrier without crossover already differ for the same commercial solver. Since PIPS is a parallel barrier algorithm, this means we can achieve the same quality as a non-parallel barrier algorithm (namely by checking the duality gap between the primal and dual solutions and keeping it below a certain convergence criterion). Similarly, Benders and Lagrange decompositions also lead to the global optimum, whereas heuristic methods such as rolling horizon will deviate from the global objective, since they effectively modify the properties of the optimization problem itself.
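As a toy illustration of that convergence check (the tiny LP and all numbers below are made up): the duality gap is just the difference between the primal objective c^T x and the dual objective b^T y, and the barrier iterate is accepted once the gap is small, even though some variables are only near zero rather than exactly zero:

```python
# Toy duality-gap check for: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0.
# The optimal vertex is x = (1, 0); a barrier method stops nearby.
def duality_gap(c, x, b, y):
    primal = sum(ci * xi for ci, xi in zip(c, x))
    dual = sum(bi * yi for bi, yi in zip(b, y))
    return primal - dual

c, b = [1.0, 2.0], [1.0]
x_barrier = [1.0 - 1e-9, 1e-9]  # interior iterate: x2 is ~0, not exactly 0
y_dual = [1.0]                  # dual multiplier of the equality constraint

gap = duality_gap(c, x_barrier, b, y_dual)
print(gap < 1e-6)  # True: accepted as converged although x2 != 0 exactly
```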

Best,

Manuel

[1] https://de.wikipedia.org/wiki/Simplex-Verfahren

[2] https://de.wikipedia.org/wiki/Innere-Punkte-Verfahren

> initi...@googlegroups.com> Im Auftrag von Robbie Morrison

> Gesendet: 19 July 2021 10:23

> An: openmod-i...@googlegroups.com

> Betreff: Re: AW: AW: [openmod-initiative] A massively parallel interior-point

> https://groups.google.com/d/msgid/openmod-initiative/54c32ab7-f0bb-

> f444-7f6a-66923e42696b%40posteo.de.


Jul 20, 2021, 5:16:56 AMJul 20

to openmod-i...@googlegroups.com

Many thanks Manuel

And just to note that barrier method and interior-point method are

synonyms: https://en.wikipedia.org/wiki/Interior-point_method

Robbie


Jul 23, 2021, 6:46:58 AMJul 23

to Tom Brown, openmod list, Kempke, Daniel Rehfeldt, Charlie Vanaret

Hi everyone,

My name is Zanko; I am a power analyst for CWE (Central Western Europe) working for an energy trading firm. I've been watching the community's activity from the sidelines for the past 1-2 years. This is the first time I'm genuinely excited at the prospect of what has been discussed, given its potential to alleviate some practical problems and bottlenecks I face on a daily basis. I have been using an LP dispatch model for my forecasts - essentially a linearized MIP model, which may not be as accurate as the MIP but is computationally less expensive, as I have to run the model up to 4 times a day for different optimization periods. However, for optimization periods upwards of 2 weeks, running 10x scenarios could take up the whole day.

I'm eager to discuss the idea of a paid collaboration whereby the outcome would be implementing the PIPS-IPM++ solver in my model. In terms of performance metrics, our model takes up to 10 min to solve a 6-scenario, 3-week optimization period at hourly resolution for 5 price zones (DE/FR/NL/BE/AT). Even slashing this benchmark by 50% would be a major achievement in my eyes.

Please feel free to get in touch with me to discuss finer details and terms of agreement.

Looking forward to hearing from you,

Best Regards

Zanko


Sep 14, 2021, 8:11:50 AMSep 14

to openmod-i...@googlegroups.com

Hi all,

I just saw these very nice slides by the DLR PIPS-IPM++ cohort:

https://elib.dlr.de/143744/1/20210901_OR21_Cao.pdf

The speed-up and memory reduction by an order of magnitude as well as

the size of problems solved (PyPSA-Eur with 3500 nodes, 8760 snapshots)

are very impressive!

That was through the PyPSA-REMix conversion Kiên mentioned earlier, right?

Did solving the large model require a lot of tweaking and parameter

tuning of the solver? How would you rate the experience? Eventually,

developing guidelines for suitable blocking strategies would be super

helpful.

Also, Jonas Hörsch and Fabian Hofmann were not only daydreaming about

PIPS-IPM++ support for PyPSA. They have started a draft here:

https://github.com/PyPSA/linopy/pull/4

Linopy extracts the nomopyomo code of PyPSA into a more general package,

so that the efficiency gains can be leveraged by others, too. Think of

it as an efficient xarray-based substitute for pyomo (restricted to LPs).

Best wishes,

Fabian

--

Fabian Neumann

TU Berlin

www.neumann.fyi


Sep 21, 2021, 8:59:19 AMSep 21

to fabian....@kit.edu, openmod-i...@googlegroups.com, Oriol.Rave...@dlr.de

Hi Fabian,

Yes, those were the results we obtained by converting the dataset to our REMix model. Most of the conversion time was spent fixing minor issues (e.g. PyPSA doesn't mind links and lines sharing the same names; REMix does, since it treats links and lines the same way) and some numerical issues which were a bit hard to find. After we could solve the model with commercial solvers, the remaining splitting into block structure and parallel solving went quite smoothly - except for some out-of-memory issues in the non-parallel phase of the algorithm. These issues were expected to happen at some point, so at least now we can also quantify when they happen - for the PyPSA-Eur instances the problem starts to occur with the capacity expansion problem at 512 regions. At least the economic dispatch variants can already be solved up to the full size, which is quite nice, especially looking into the near future at joint electricity and gas networks.

As for the annotation, we currently still use the decomposition into timesteps (or snapshots in the PyPSA lingo). This ensures the individual blocks contain a similar level of complexity, which is quite important for parallel applications. The worst case would be a full compute cluster waiting for a single block that contains most of the model's complexity.
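The balanced-blocks idea above can be sketched in a few lines of plain Python (this is only an illustration of the principle, not the actual PIPS-IPM++ annotation format, and the constraint names and the `block_of` helper are made up): per-snapshot constraints are assigned to blocks by their timestep, linking constraints that couple all snapshots go into a dedicated block 0, and the mapping keeps all blocks nearly the same size.

```python
# Sketch: annotate constraints of an hourly model with block ids so
# that each parallel block carries a similar share of the work.
from collections import Counter

N_SNAPSHOTS = 8760   # hourly snapshots for one year
N_BLOCKS = 24        # number of parallel blocks (e.g. MPI ranks)

def block_of(snapshot):
    """Map a snapshot index to a block id in 1..N_BLOCKS; block 0 is
    reserved for linking constraints that couple all snapshots."""
    return 1 + snapshot * N_BLOCKS // N_SNAPSHOTS

# Per-snapshot constraints (nodal balances, ramp limits, ...) inherit
# the block of their snapshot; time-coupling constraints get block 0.
annotation = {f"balance[t={t}]": block_of(t) for t in range(N_SNAPSHOTS)}
annotation["co2_cap"] = 0  # e.g. an annual CO2 cap links everything

# Check the blocks are balanced: one oversized block would leave the
# rest of the compute cluster idle while it is being factorised.
sizes = Counter(annotation.values())
per_block = [sizes[b] for b in range(1, N_BLOCKS + 1)]
print(max(per_block) - min(per_block))  # -> 0 (perfectly balanced here)
```

With 8760 snapshots and 24 blocks each block gets exactly 365 snapshots; for block counts that do not divide the horizon evenly, the integer mapping still keeps the imbalance to at most one snapshot.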

I'm really looking forward to your progress with linopy. Our colleagues in Oldenburg (Oriol in CC) are also looking into solving some of their eGon models created in the MODEX-NET project with PIPS-IPM++, so this might be a format for some cooperation.

Best,

Manuel

-----Original Message-----

From: openmod-i...@googlegroups.com <openmod-i...@googlegroups.com> On behalf of Fabian Neumann

Sent: Tuesday, 14 September 2021 14:12

To: openmod-i...@googlegroups.com

Subject: Re: AW: AW: AW: [openmod-initiative] A massively parallel interior-point solver for linear energy system models with block structure


