I'm quite new to Dedalus.
While setting up a simple 1D system of nonlinear equations with random forcing, I've come across a weird issue with the forcing. To simplify, I use a GeneralFunction operator to define a forcing that can be time- or x-dependent and has a random amplitude close to 1, for example:
import numpy as np
import dedalus.public as de

def my_forcing(*args):
    # random amplitude close to 1: mean mu, standard deviation s
    s = 0.1; mu = 1.0
    return np.random.normal(mu, s)

def Forcing(*args, domain=domain, F=my_forcing):  # domain is defined earlier in the script
    return de.operators.GeneralFunction(domain, layout='g', func=F, args=args)

de.operators.parseables['F'] = Forcing  # make F() available in equation strings
and later add it to my system with an additional equation:
problem.add_equation("u = F()")
I integrate the system in time and record the time t and the forcing u that is applied inside the solver. Afterwards, I compare u with a test forcing u_test = my_forcing() computed independently of the simulation. Of course, they are not expected to be identical since the forcing is random, but surprisingly, the standard deviation of u recorded in the simulation is much larger than that of u_test computed a posteriori (see the attached figure). Given the parameters of my forcing (s = 0.1, mu = 1.0), u_test seems to be correct and u wrong. If the randomness is removed from the forcing, everything works just fine and u and u_test are identical.
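Concretely, the a posteriori check is nothing more than the following (a sketch; u_list is the list of u values I record from the solver, as in the script sketch below):

import numpy as np

u = np.array(u_list)                                      # recorded inside the solver
u_test = np.array([my_forcing() for _ in range(len(u))])  # sampled a posteriori
print("std(u)      =", u.std())       # comes out much larger than 0.1 in my runs
print("std(u_test) =", u_test.std())  # close to s = 0.1, as expected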
What could have gone wrong here? I have tested several variants of the forcing and boiled it down to a minimal script that demonstrates the problem (also attached).
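In case the attachment doesn't come through, the core of the minimal script is essentially the following (a sketch: the basis, resolution, timestepper, timestep, and number of steps are placeholders, not the exact values from the attached script):

import numpy as np
import dedalus.public as de

xbasis = de.Fourier('x', 64, interval=(0, 1))
domain = de.Domain([xbasis], grid_dtype=np.float64)

# my_forcing, Forcing, and the parseables registration as above

problem = de.IVP(domain, variables=['u'])
problem.add_equation("u = F()")
solver = problem.build_solver(de.timesteppers.RK443)

dt = 1e-3
t_list, u_list = [], []
for i in range(1000):
    solver.step(dt)
    t_list.append(solver.sim_time)
    u_list.append(solver.state['u']['g'][0])  # forcing is spatially uniform, so one grid point suffices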
I would be grateful for any help on this!
Thanks,
Anna