Sebastian Funk
May 16, 2017, 7:58:24 AM
to Sohrab Salehi, LibBi Users
Hi Sohrab,
Can you provide a reproducible example? The model you quote does not compile as it stands.
Seb.
Sohrab Salehi wrote:
> Hi,
>
> I'm facing what is most probably an overflow problem. To impose boundary
> conditions in the transition block, I assign the updated value of the state
> variable to a temporary variable declared via inline. But simulating from the
> model then produces NaNs after some time step (say t_s): combining the value
> of the temp variable at t_s with any number, via any mathematical operation,
> yields NaN.
> When the same boundary condition is implemented using an additional dummy
> state variable instead, it works as expected. This workaround would have been
> fine if it didn't double the number of states to be inferred.
> I've made sure that enable-single is set to false.
>
> The model looks as follows:
> model someModel {
> dim k(size = 2)
> dim i(size = 3)
> const h = 0.001 // step size
> const n = 200
> const epsilon = 0.05 // observation error tolerance
> const xSum0_1 = 0;
> noise deltaW[k] // Wiener process steps
> param x0[i] // workaround for starting value of x
> state x[k]
> obs y[k]
>
> sub parameter {
> x0[i] ~ gamma()
> }
>
> sub initial {
> x[k] <- x0[k]/(x0[0] + x0[1] + x0[2])
> }
>
> sub transition(delta = h) {
> deltaW[k] ~ wienr()
>
> inline xSum0_1 = 0;
> inline x_new0_1 = x[0] + (-.2*x[0] - .3*x[1] + .2)*n*x[0]*h + sqrt((1 - x[0])*x[0])*deltaW[0];
> inline x_new0_2 = min(1-xSum0_1, min(1, max(x_new0_1, 0)));
> inline xSum1_1 = xSum0_1 + x_new0_2;
> inline x_new1_1 = x[1] + (-.2*x[0] - .3*x[1] + .3)*n*x[1]*h + sqrt((1 - x[0] - x[1])*x[1]/(1 - x[0]))*deltaW[1] - sqrt((1 - x[0])*x[0])*deltaW[0]*x[1]/(1 - x[0]);
> inline x_new1_2 = min(1-xSum1_1, min(1, max(x_new1_1, 0)));
> inline xSum2_1 = xSum1_1 + x_new1_2;
> x[0] <- x_nw0_2;
> x[1] <- x_nw1_2;
> }
>
> sub observation {
> y[k] ~ uniform(x[k]-epsilon, x[k]+epsilon)
> }
> }
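
The dummy-state workaround described above is not shown in the thread; a rough sketch of what it might look like follows. The name x_new[k] and the exact arrangement of the truncation are assumptions, not code from the thread, and only the extra declaration and the revised transition block are given.

    state x_new[k]   // dummy states holding the truncated update; these are what double the inferred states

    sub transition(delta = h) {
      deltaW[k] ~ wiener()
      // unconstrained Euler-Maruyama step for the first component, truncated to [0, 1]
      x_new[0] <- min(1, max(x[0] + (-.2*x[0] - .3*x[1] + .2)*n*x[0]*h + sqrt((1 - x[0])*x[0])*deltaW[0], 0))
      // second component, additionally constrained so that the two components sum to at most 1
      x_new[1] <- min(1 - x_new[0], min(1, max(x[1] + (-.2*x[0] - .3*x[1] + .3)*n*x[1]*h + sqrt((1 - x[0] - x[1])*x[1]/(1 - x[0]))*deltaW[1] - sqrt((1 - x[0])*x[0])*deltaW[0]*x[1]/(1 - x[0]), 0)))
      x[0] <- x_new[0]
      x[1] <- x_new[1]
    }
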
>
> I was wondering if you could think of any solutions?
>
> I'm using LibBi 1.3.0 and running the model using RBi.
>
> Thanks,
> Sohrab
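
On the precision point: when driving LibBi from R, the relevant option can be passed to the run explicitly. The snippet below is only an illustration of how that might look with the rbi package; the argument names (options, disable-single) and placeholder values are assumptions and may differ between rbi and LibBi versions.

    library(rbi)

    # Hypothetical illustration: run the model above in double precision.
    # "someModel.bi", end_time and nsamples are placeholders.
    model <- bi_model("someModel.bi")
    bi <- sample(model,
                 end_time = 1,
                 options = list(nsamples = 1000, "disable-single" = TRUE))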