On 18. Jul 2021, at 23:36, Jonas Hörsch <jonas....@posteo.de> wrote:
> Since the StochInputNode object is standard PIPS (from before the IPM++ extension), one should be able to adapt their pyomo solvers or learn from the StructJuMP.jl interface.
Yes, PIPS-IPM++ is completely independent of GAMS; the only challenge is providing appropriate files containing the block structure and a corresponding driver to get the solver started. The driver has to be adapted to the modelling language you are working with (or you use some generic format such as an LP file, which however must carry additional annotations marking the blocks). The basic idea behind the block structure is that you have several semi-independent parts of the problem and one part linking them together. In the simplest case, imagine a stochastic optimization problem where you want to optimize the dispatch of conventional power plants against an expected stochastic feed-in of renewables: you decide on one dispatch profile in the first stage and then calculate the total expected cost over the probabilistic scenarios. This was also the initial use case for StructJuMP together with the original solver PIPS-IPM. [1,2]
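To make the block idea concrete, here is a toy two-stage model in Pyomo (purely illustrative; all numbers and names are made up and this is not tied to any PIPS input format):

    import pyomo.environ as pyo

    # scenario -> probability of an uncertain renewable feed-in
    scenarios = {"low": 0.3, "high": 0.7}
    feedin = {"low": 20.0, "high": 60.0}
    demand = 100.0

    m = pyo.ConcreteModel()
    # First stage: one dispatch decision shared by all scenarios
    m.dispatch = pyo.Var(bounds=(0, 80))
    # Second stage: per-scenario backup generation (one block per scenario)
    m.backup = pyo.Var(list(scenarios), within=pyo.NonNegativeReals)

    # Each scenario block only touches its own variables plus the shared
    # first-stage dispatch -- that shared variable is the "linking" part.
    def balance_rule(m, s):
        return m.dispatch + feedin[s] + m.backup[s] >= demand
    m.balance = pyo.Constraint(list(scenarios), rule=balance_rule)

    # Minimize first-stage dispatch cost plus expected cost of backup
    m.cost = pyo.Objective(
        expr=10 * m.dispatch
        + sum(scenarios[s] * 100 * m.backup[s] for s in scenarios))

Each scenario block here is independent of the others except through m.dispatch, which is exactly the structure the original PIPS-IPM exploits.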
The main drawback of the original solver is that it can only handle linking variables, since the blocks are otherwise completely independent aside from the first-stage decision. To solve more general types of models you also need linking constraints (e.g. telling the next block what the storage level of the lithium-ion batteries was at the end of the previous block). This is possible in PIPS-IPM++, but not in the original PIPS-IPM (hence the ++ suffix in the name).
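Schematically (in my own notation, not PIPS syntax), the overall structure looks roughly like this:

    min  c0'x0 + c1'x1 + ... + cN'xN
    s.t. A0 x0                         = b0   (first stage)
         Ti x0 + Wi xi                 = bi   (block i = 1..N)
         F0 x0 + F1 x1 + ... + FN xN   = f    (linking constraints)

The first-stage variables x0 appearing in every block are the linking variables, which the original PIPS-IPM already handles; the last row couples the blocks to each other directly (e.g. handing a storage level from one block to the next) and is what PIPS-IPM++ adds.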
For the people not so familiar with HPC and MPI: High Performance Computing (HPC) clusters are typically optimized for large compute performance on CPUs, GPUs, or both. This means you have a lot of machines, but each individual machine has relatively little memory available (typically in the range of 100-200 GB), which cannot hold the optimization matrix of a large-scale energy system model. A single machine is the shared-memory setting: one application runs on one machine and accesses one block of memory. PIPS-IPM++ instead allows using multiple machines at the same time; each machine then only has access to its own memory, but also only needs to store its part of the optimization matrix. This is distributed memory: one application uses the memory of multiple compute nodes.

On the other hand, this means the application itself has to run across multiple machines (while also benefiting from their combined CPU power). This is achieved with the Message Passing Interface (MPI), which lets the processes of a single application communicate across nodes. The neat part is that we can build as many blocks as we like and distribute them over the compute nodes. In theory, this flexibility lets us simply increase the number of blocks whenever memory or time constraints become an issue; in the real world, other effects such as communication overhead or buggy libraries can prevent this…
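A minimal mpi4py sketch of this pattern (this shows the general distributed-memory idea, not the actual PIPS-IPM++ driver; the block contents are dummy data):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each process stores only its own block in its own node's memory,
    # so the full matrix never has to fit on a single machine.
    local_block = np.random.default_rng(rank).random((1000, 1000))

    # Only the (small) contribution to the linking rows is communicated;
    # the big block itself never leaves the node.
    local_contrib = local_block.sum(axis=0)
    linking_total = np.empty_like(local_contrib)
    comm.Allreduce(local_contrib, linking_total, op=MPI.SUM)

    if rank == 0:
        print("combined linking contribution from",
              comm.Get_size(), "blocks")

Started with e.g. "mpirun -n 4 python sketch.py", this runs four processes, each holding its own block, and only the small vector entering the linking rows travels over the network.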
Hope this helps explain the underlying problem a bit better.