Hi,
> I am not able to compute steady-state probabilities for a model which I
> think is not very big
>
> When using the explicit engine, the model construction is fine
>
> Time for model construction: 184.819 seconds.
> Warning: Deadlocks detected and fixed in 232 states
> Type: DTMC
> States: 3254257 (1 initial) Transitions: 14794182
>
> But computing -ss failed due to lack of memory even with -javamaxmem 128g
>
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
That's definitely not a small model. Generally, though, a model of that
size should be manageable in that amount of RAM with the explicit
engine.
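For a rough sense of scale, here is a back-of-envelope estimate. The
per-entry byte counts are assumptions about a generic compressed-sparse-row
layout plus a couple of iteration vectors, not PRISM's actual internal
storage, so treat the result as an order-of-magnitude figure only:

```python
# Back-of-envelope memory estimate for the explicit representation of
# this model. Assumed layout (NOT PRISM's real data structures):
#   - per transition: 8-byte double value + 4-byte column index
#   - per state: 4-byte row pointer, plus 8 bytes per solution vector

states = 3_254_257
transitions = 14_794_182

matrix_bytes = transitions * (8 + 4) + states * 4  # sparse transition matrix
vector_bytes = states * 8                          # one iteration vector

total_mb = (matrix_bytes + 2 * vector_bytes) / 1e6  # matrix + two vectors
print(f"~{total_mb:.0f} MB")
```

Under these assumptions the core data structures come to a few hundred
megabytes, far below 128 GB, which suggests the OutOfMemoryError is coming
from some other part of the steady-state computation rather than from the
model itself.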
If you'd like, could you send me your PRISM file? I can have a look at
which parts of the computation are actually failing; perhaps there's
potential for optimization there.
> I changed the engine type to hybrid, however it could not even
> construct the model
> Command: prism -cuddmaxmem 8g -javamaxmem 128g -m -ss model7.test
>
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> # SIGSEGV (0xb) at pc=0x00007f2a0d66e0b4, pid=62465, tid=0x00007f4be4057700
> ...
That looks like the symbolic MTBDD representation grows too large. You
might be able to get a smaller representation by playing a bit with the
order of variables and modules in the PRISM source, as that influences
the variable order used for building the symbolic representation.
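To illustrate why the variable order matters so much for the size of the
symbolic representation, here is a small self-contained sketch. It is plain
Python (not PRISM's or CUDD's actual data structures) that counts the
internal nodes of a reduced BDD for the classic function
x1·y1 ∨ x2·y2 ∨ x3·y3 under an interleaved and a separated order:

```python
# Toy reduced-BDD node counter (hash-consing on (var, lo, hi) triples),
# purely to demonstrate the effect of variable order on BDD size.

def bdd_size(f, order):
    """Number of internal nodes of the reduced BDD of f under 'order'.

    f     : function taking a dict {var_name: bool} -> bool
    order : list of variable names, root variable first
    """
    unique = {}  # (var, lo_id, hi_id) -> node id; ids 0/1 are the terminals

    def build(level, assignment):
        if level == len(order):
            return 1 if f(assignment) else 0
        var = order[level]
        lo = build(level + 1, {**assignment, var: False})
        hi = build(level + 1, {**assignment, var: True})
        if lo == hi:  # reduction rule: skip nodes with identical children
            return lo
        return unique.setdefault((var, lo, hi), len(unique) + 2)

    build(0, {})
    return len(unique)

# f = x1*y1 + x2*y2 + x3*y3 (Boolean)
f = lambda a: (a["x1"] and a["y1"]) or (a["x2"] and a["y2"]) \
    or (a["x3"] and a["y3"])

interleaved = ["x1", "y1", "x2", "y2", "x3", "y3"]
separated   = ["x1", "x2", "x3", "y1", "y2", "y3"]

print(bdd_size(f, interleaved))  # 6 nodes:  grows linearly (2n)
print(bdd_size(f, separated))    # 14 nodes: grows exponentially (2^(n+1)-2)
```

The same effect applies to the MTBDDs PRISM builds for a model: an order
that keeps related variables close together (here, each xi next to its yi)
can be exponentially smaller than one that separates them, which is why
reordering modules and variables in the PRISM source can help so much.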
If your model is structured parametrically, i.e., you have some related
variant that is smaller (e.g., fewer modules, smaller variable
ranges, ...) and that can be successfully built using -m, then you might
be able to use variable reordering to automatically find a better
variable order. We have recently implemented some functionality for
that; you can find a version of PRISM with support for it at
https://wwwtcs.inf.tu-dresden.de/ALGI/PUB/STTT/
From the generated variable ordering of the smaller variant you might
be able to manually extrapolate to the larger variant.
> On the other hand, is it common practice to first construct the model
> using the explicit engine, export it, and then model check the
> imported model with another engine?
No, generally not: if you export an explicit model and load that for use
with the symbolic engines (mtbdd, hybrid, sparse), then you lose all
structural information and will usually get a much less efficient
symbolic representation of the model.
Cheers,
Joachim Klein