On 18. Jul 2021, at 23:36, Jonas Hörsch <jonas....@posteo.de> wrote:
Hi all,
Yes, PIPS-IPM++ is completely independent of GAMS; the only challenge is providing appropriate files containing the block structure and a corresponding driver to get the solver started. This driver has to be adapted to the modelling language you are working with (or you need some generic format such as an LP file, which however must contain additional annotations). The basic idea behind the block structure is that you have several semi-independent parts of the problem and one part of the problem linking them together. In the simplest case, imagine a stochastic optimization problem where you want to optimize the dispatch of conventional power plants against an expected stochastic feed-in of renewables: you decide on one dispatch profile in the first stage and then calculate the total expected cost over the probabilistic scenarios. This was also the initial use case for StructJuMP together with the original solver PIPS-IPM [1,2].
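To make the block structure concrete, here is a minimal sketch (all sizes and matrix names invented for illustration) of the block-angular "arrow" constraint matrix behind such a two-stage stochastic problem: first-stage constraints on the dispatch x, and per-scenario recourse blocks that each couple back to x:

```python
# Hypothetical sketch of the block-angular structure of a two-stage
# stochastic problem: one first-stage decision x, per-scenario recourse y_s.
from scipy import sparse

n_x, n_y, n_scen = 3, 4, 2  # first-stage vars, per-scenario vars, scenarios

A0 = sparse.random(2, n_x, density=1.0, random_state=0)  # first-stage constraints
Ts = [sparse.random(3, n_x, density=1.0, random_state=s) for s in range(n_scen)]
Ws = [sparse.random(3, n_y, density=1.0, random_state=10 + s) for s in range(n_scen)]

# Assemble the full constraint matrix:
# [ A0          ]
# [ T1  W1      ]
# [ T2      W2  ]
blocks = [[A0] + [None] * n_scen]
for s in range(n_scen):
    row = [Ts[s]] + [None] * n_scen
    row[1 + s] = Ws[s]          # scenario block s only touches x and y_s
    blocks.append(row)
M = sparse.bmat(blocks)
print(M.shape)  # (2 + 3*n_scen, n_x + n_y*n_scen) -> (8, 11)
```

The scenario blocks share no variables with each other, only with the first-stage columns, which is exactly what lets a structure-exploiting solver factorize them independently.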
The main drawback of the original solver is that it can only handle linking variables, since the blocks are completely independent aside from the first-stage decision. To solve more general types of models you also need to consider linking constraints (e.g. telling another block what the storage level of the lithium-ion batteries was at the end of the previous block). This is possible in PIPS-IPM++, but not in the original PIPS-IPM (hence the "++" suffix).
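As a toy illustration of the difference (variable layout invented), a storage-continuity linking constraint is simply a constraint row whose nonzeros fall in the columns of two different blocks:

```python
# Hypothetical illustration: a linking constraint coupling two otherwise
# independent time blocks, e.g. storage continuity
#   soc_end(block 1) - soc_start(block 2) = 0
import numpy as np

# invented variable layout: block 1 = [p1, soc1_end], block 2 = [p2, soc2_start]
n_vars = 4
linking_row = np.zeros(n_vars)
linking_row[1] = 1.0    # +soc1_end   (last storage level of block 1)
linking_row[3] = -1.0   # -soc2_start (first storage level of block 2)

# the row has nonzeros in BOTH blocks' column ranges -- this is what the
# original PIPS-IPM cannot represent but PIPS-IPM++ can
cols = np.nonzero(linking_row)[0]
print(cols.tolist())  # [1, 3] -> spans block 1 and block 2
```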
For the people not so familiar with HPC and MPI: high-performance computers (HPC) are typically optimized for large CPU performance – or nowadays GPU performance, or both. This means you have a lot of machines, but each individual machine has relatively little memory available (typically in the range of 100–200 GB), which cannot fit the optimization matrix of a large-scale energy system model. Running one application on one machine, accessing one block of memory, is the shared-memory setting. PIPS-IPM++ allows using multiple machines at the same time: each machine only has access to its own memory, but also only needs to store a part of the optimization matrix. This is distributed memory, since one application uses the memory of multiple compute nodes. On the other hand, this means the application itself has to run on multiple machines – but it also benefits from more CPUs. This is achieved via the Message Passing Interface (MPI), which allows communication between nodes within a single application. The neat part is that we can build as many blocks as we like and distribute them across compute nodes. In theory, this flexibility lets us just increase the number of blocks whenever memory or time constraints become an issue – in the real world, other effects such as communication overhead or buggy libraries can prevent this…
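A very small sketch of the "distribute blocks to nodes" idea (the round-robin mapping is an assumption for illustration, not necessarily how PIPS-IPM++ partitions): each MPI rank only needs to store the matrix blocks assigned to it.

```python
# Hypothetical sketch: assign N problem blocks to the available MPI ranks,
# so each rank only stores (and works on) its share of the matrix.
def assign_blocks(n_blocks, n_ranks):
    """Round-robin mapping block index -> rank (a common, simple scheme)."""
    return {b: b % n_ranks for b in range(n_blocks)}

mapping = assign_blocks(n_blocks=8, n_ranks=3)
my_rank = 1
my_blocks = [b for b, r in mapping.items() if r == my_rank]
print(my_blocks)  # blocks stored on rank 1 -> [1, 4, 7]
```

Adding more blocks (and ranks) shrinks the per-node memory footprint, which is the flexibility described above – up to the point where communication overhead starts to dominate.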
Hope this helps explain the underlying problem a bit better.
Best,
Manuel
[1] https://ieeexplore.ieee.org/document/7069901
[2] https://ieeexplore.ieee.org/document/6809706
Hello everyone,
A couple of thoughts on this, since by chance I spent some time dreaming about this with Fabian Hoffmann after digging into the driver code at [1] a couple of weeks ago.
The main interface code in that repo is at [2]. It reads the GDX files that are built from having GAMS interpret the annotations and split the model into separate problem files. The main thing to do is to build a StochInputTree with a root node (the part of the model annotated as 0 in the BestPracticeGuide) and then sub-nodes for each block of the model. For each node, it looks like you mainly want the constraint matrix and its bounds, as well as the variable bounds (up to line 352). The rest of the file is then mostly independent of GAMS (ok, writing the solution back is of course also GAMS-specific in that case).
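Just to fix ideas, a hypothetical mirror (all names invented, not the actual PIPS API) of the per-node data that the driver appears to collect before handing it over: a constraint matrix plus row and column bounds for the root (block 0) and each sub-block.

```python
# Hypothetical sketch (names invented) of the per-node data a driver would
# collect: constraint matrix + constraint/variable bounds per block.
from dataclasses import dataclass
import numpy as np
from scipy import sparse


@dataclass
class BlockData:
    A: sparse.csr_matrix   # constraint matrix of this block
    row_lb: np.ndarray     # constraint lower bounds
    row_ub: np.ndarray     # constraint upper bounds
    col_lb: np.ndarray     # variable lower bounds
    col_ub: np.ndarray     # variable upper bounds


def make_block(m, n, seed):
    # dummy data standing in for what would be read from the GDX files
    A = sparse.random(m, n, density=0.5, random_state=seed, format="csr")
    return BlockData(A, np.full(m, -np.inf), np.zeros(m),
                     np.zeros(n), np.full(n, np.inf))


root = make_block(2, 3, seed=0)                      # block annotated as 0
children = [make_block(3, 4, seed=s) for s in (1, 2)]  # one per sub-block
tree = {"root": root, "children": children}
print(len(tree["children"]), root.A.shape)
```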
Since the StochInputNode object is standard PIPS (from before the IPM++ extension), one should be able to adapt their pyomo solvers or learn from the StructJuMP.jl interface [3].
For PyPSA, this would mean that the LP writing in linopt.py would have to be replaced with writing out the sparse matrices and their bounds instead of the LP file components. It's not immediately clear what the best way to do this is, but it's definitely possible. Fabian suggested that it might fit more easily into his latest nomopyomo playground [4].
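One possible sketch of the "write sparse matrices instead of an LP file" step (the file layout is an assumption for illustration, not what PIPS expects): dump each block's constraint matrix in the Matrix Market exchange format, which scipy supports out of the box and which round-trips losslessly.

```python
# Hypothetical sketch: serialize a block's sparse constraint matrix to disk
# in Matrix Market format instead of emitting LP-file text.
import os
import tempfile
import numpy as np
from scipy import sparse
from scipy.io import mmread, mmwrite

A = sparse.random(3, 4, density=0.5, random_state=0, format="coo")

path = os.path.join(tempfile.mkdtemp(), "block0_A.mtx")
mmwrite(path, A)        # portable sparse-matrix exchange format
A_back = mmread(path)   # the driver on the other side reads it back

# round-trip check: the reader sees exactly the same matrix
print(np.allclose(A.toarray(), A_back.toarray()))  # True
```

The bounds vectors could go alongside as plain arrays; whatever format is chosen, the key point is that the sparse triplets are written directly rather than rendered to LP-file text and re-parsed.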
--
You received this message because you are subscribed to the Google Groups "openmod initiative" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openmod-initiat...@googlegroups.com.
To view this discussion on the web, visit https://groups.google.com/d/msgid/openmod-initiative/20210716084449.BF5F736F92_F14701B%40SPMA-02.tubit.win.tu-berlin.de.