Let's assume you have a property R{"r"}=? [ F "target" ], which computes
the expected cumulative reward "r" accumulated until "target" is reached.
Split your reward structure "r" into two separate reward structures,
"pos" and "neg", containing the positive and negative rewards,
respectively. For the latter, negate the values so that the rewards
stored in "neg" are also positive. Then use the property:
R{"pos"}=? [ F "target" ] - R{"neg"}=? [ F "target" ]
and this will give you what you want. Note that this only works if both
parts are finite, i.e. if "target" is eventually reached with probability 1.
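As a concrete illustration of the split, here is a minimal sketch in the PRISM language (the state variable "s", the guards, and the reward values are invented for this example, not taken from any particular model):

```
// Hypothetical mixed-sign reward structure that PRISM cannot handle directly:
//   rewards "r"
//     s=0 : 2;
//     s=1 : -3;
//   endrewards

rewards "pos"    // positive part of "r", kept as-is
  s=0 : 2;
endrewards

rewards "neg"    // negative part of "r", with the sign flipped
  s=1 : 3;       // stored as +3; it was -3 in "r"
endrewards
```

The query R{"pos"}=? [ F "target" ] - R{"neg"}=? [ F "target" ] then recovers the expected value of the original mixed-sign reward, provided both sub-properties are finite.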
Dave
On 19/11/2017 19:26, dine...@gmail.com wrote:
> Hi Dave
>
> I have a DTMC model; it would be great if PRISM allowed mixing positive
> and negative rewards, at least for DTMCs.
>
> On Sunday, November 19, 2017 at 7:11:37 PM UTC, d.a.parker wrote:
>
> Thinking about it, we could potentially add support for Markov chains,
> but I don't know a good general algorithm to tackle this class of
> problem for Markov decision processes. What model do you have?
>
> Failing that, an alternative, if your state space is not too big, is to
> add another variable to your model that counts the reward explicitly.
> This will require putting a bound on the max (and min) values that the
> counter can take.
>
> Dave
>
> --
> You received this message because you are subscribed to the Google
> Groups "PRISM model checker" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to prismmodelchec...@googlegroups.com.