regarding multi objective properties


Mridu Nanda

Mar 14, 2024, 12:31:32 AM
to PRISM model checker
Hi,

I have the following model:

mdp

const int ALIVE = 0;
const int WIN_ZERO = 1;
const int WIN_ONE = 2;
const int CAUGHT = 8;
const int DONE = 9;
const double CAUGHT_PROB;

module MAIN
  s : [ALIVE..DONE] init ALIVE;

  [] s = ALIVE -> 1 : (s' = WIN_ZERO) + 0 : (s' = CAUGHT);
  [] s = ALIVE -> 0.375 : (s' = WIN_ONE) + 0.625 : (s' = CAUGHT);

  [] s = WIN_ZERO -> (s' = DONE);
  [] s = WIN_ONE -> (s' = DONE);
endmodule

rewards
  [] s = WIN_ZERO : 0;
  [] s = WIN_ONE  : 1;
endrewards

I want to compute the maximum reward subject to the constraint that the probability of reaching CAUGHT stays at most 0.5. Logically, I feel like the answer should be 0: the first action is the only one an adversary can take without violating the caught-probability constraint, and it gives 0 reward.
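My reasoning can be sketched quickly outside PRISM (this is just plain Python, not PRISM output): enumerate the two deterministic choices in state ALIVE as (caught-probability, expected reward) pairs and keep only those that satisfy the bound.

```python
# Sketch of the deterministic-policy reasoning (not PRISM itself).
CAUGHT_PROB = 0.5

# (probability of reaching CAUGHT, expected reward) for each choice in ALIVE
choices = {"first": (0.0, 0.0), "second": (0.625, 0.375)}

# keep only the choices meeting the caught-probability bound
feasible = {name: r for name, (p, r) in choices.items() if p <= CAUGHT_PROB}

print(feasible)                # only the first choice survives
print(max(feasible.values()))  # best deterministic reward: 0.0
```

Under this (deterministic-only) view, the answer would indeed be 0.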

However, when I verify multi(Rmax=? [ C ], P<=CAUGHT_PROB [ F s=CAUGHT ]), I get 0.3.

I would really appreciate some help in understanding this result!

Thanks (this forum has been very helpful),
Mridu
test.props
test.prism

Dave Parker

Mar 14, 2024, 5:48:13 AM
to prismmod...@googlegroups.com, Mridu Nanda
Hi Mridu,

The optimal policies for multi-objective queries are a bit more complex;
in particular, they can be randomised. The optimal behaviour for keeping
the probability of reaching CAUGHT at most 0.5 is, in state ALIVE, to
pick the first choice with probability 0.2 and the second with
probability 0.8. That gives a probability of reaching CAUGHT of
0.8 * 0.625 = 0.5, and an expected reward of 0.8 * 0.375 = 0.3.
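To make the arithmetic concrete, here is a small sketch (plain Python, not PRISM) of how that mixing probability falls out: the expected caught-probability grows linearly with the weight q placed on the second (risky) choice, so the best feasible q is the one that saturates the bound.

```python
# Reproduce the randomised-policy arithmetic for state ALIVE.
CAUGHT_PROB = 0.5                    # bound from the multi() query

p_caught_2, reward_2 = 0.625, 0.375  # the second (risky) choice
# The first choice contributes 0 to both caught-probability and reward,
# so with weight q on the second choice:
#   P(reach CAUGHT) = q * 0.625,  expected reward = q * 0.375.
# The reward is maximised by the largest feasible q:
q = min(1.0, CAUGHT_PROB / p_caught_2)

expected_caught = q * p_caught_2
expected_reward = q * reward_2
print(q, expected_caught, expected_reward)  # q = 0.8, caught = 0.5, reward = 0.3
```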

I think you were reasoning over optimal deterministic policies only;
restricting to deterministic policies is not something that PRISM's
multi-objective model checking supports.

We recently updated PRISM's support for generating/viewing optimal
policies/strategies, but multi-objective queries have not yet been
adapted to this, so to get hold of the (randomised) policies, you
currently need to use the old functionality. This:

prism model.prism model.props -const CAUGHT_PROB=0.5 -lp -exportadvmdp adv.tra -exportprodstates adv.sta

gives the model induced by the optimal policy in adv.tra, and its state
space is listed in adv.sta.

Hope that helps,

Dave