Hi Henry,
This is a tricky issue. We have built two features to try to reduce memory use, but it still seems that some objects evade R's garbage collection. This may be because we have many reference class objects and environments, which can create closed loops of references. The two features are:
nimbleOptions(clearNimbleFunctionsAfterCompiling = TRUE) # This can modestly reduce memory use
nimble:::clearCompiled(model) # This attempts to clear all compiled content for the project related to model and to unload the on-the-fly compiled shared library used for it.
However, I tried both of these and they don't resolve the issue you're reporting, so it's something we'll have to look into more. We have worked on this in the past, but what you're seeing doesn't look like good behavior.
Here are some potential workarounds.
Within each loop iteration, you could use system2() to call Rscript and launch a self-contained process, so that all memory is returned to the operating system when the process exits. This would make sense if you really need to do the full nimble building and compilation each time.
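For example, here is a minimal sketch of that pattern (the script name sim_one.R, the n_reps count, and the use of saveRDS()/readRDS() to pass results between processes are hypothetical placeholders):

# sim_one.R builds, compiles, and runs one replicate, then saves its results, e.g.
# saveRDS(results, sprintf("results_%s.rds", commandArgs(trailingOnly = TRUE)[1]))
for (i in 1:n_reps) {
  system2("Rscript", args = c("sim_one.R", as.character(i)))
}
all_results <- lapply(1:n_reps, function(i) readRDS(sprintf("results_%d.rds", i)))

Because each replicate runs in a fresh R process, everything it allocated is freed when that process exits.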
If you are really re-using the same model structure, you could build and compile just once and then simply re-assign the data values, e.g.
Cmodel$y <- some_other_values
and then re-run your already-compiled MCMC.
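Concretely, a minimal sketch of that pattern (the toy model, sizes, and MCMC settings are placeholders):

library(nimble)
code <- nimbleCode({
  mu ~ dnorm(0, sd = 10)
  for (i in 1:N) y[i] ~ dnorm(mu, sd = 1)
})
model <- nimbleModel(code, constants = list(N = 10), data = list(y = rnorm(10)))
Cmodel <- compileNimble(model)
Cmcmc <- compileNimble(buildMCMC(configureMCMC(model)), project = model)
for (rep in 1:100) {
  Cmodel$y <- rnorm(10)  # new data values for this replicate
  Cmodel$calculate()     # refresh cached log probabilities after changing y
  samples <- runMCMC(Cmcmc, niter = 1000, nburnin = 200)
  # summarize or save samples here
}

All of the building and compiling happens once, outside the loop.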
If in your simulations you need models of different sizes or different setups of what is data (observed vs. unobserved), it gets trickier, but it is still possible to build and compile just once and re-use those objects. For example, you can configure, build, and compile a full set of samplers for all nodes (including, atypically, data nodes) and then control the sampler order in a particular run of the MCMC so that some samplers run and others are omitted. In that way, you can treat some nodes like data and some like unobserved quantities on a run-by-run basis (see the sketch below). If that sounds like something you need and it is too imprecisely described here, please holler again and we can go into more detail.
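To give a flavor of this, here is a minimal sketch of one related pattern: build two MCMC configurations from the same model, one that samples y and one that leaves y fixed, and compile both in the same project, choosing between them run by run. The toy model and settings are placeholders, and the finer control available via the configuration's setSamplerExecutionOrder() is not shown:

library(nimble)
code <- nimbleCode({
  mu ~ dnorm(0, sd = 10)
  for (i in 1:N) y[i] ~ dnorm(mu, sd = 1)
})
# y gets initial values but is NOT flagged as data, so samplers can be assigned to it
model <- nimbleModel(code, constants = list(N = 5), inits = list(mu = 0, y = rnorm(5)))
confA <- configureMCMC(model, nodes = c("mu", "y"))  # samples y: y acts as unobserved
confA$addMonitors("y")
confB <- configureMCMC(model, nodes = "mu")          # omits samplers on y: y acts as data
mcmcA <- buildMCMC(confA)
mcmcB <- buildMCMC(confB)
Cmodel <- compileNimble(model)
Ccomp <- compileNimble(mcmcA, mcmcB, project = model)  # compile both MCMCs in one call
# Treat y as observed in this run: fix its values, then use the MCMC without y's samplers
Cmodel$y <- rnorm(5, mean = 1)
Cmodel$calculate()
samples_y_fixed <- runMCMC(Ccomp$mcmcB, niter = 1000)
# Treat y as unobserved in this run: use the MCMC that samples y as well
samples_y_free <- runMCMC(Ccomp$mcmcA, niter = 1000)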
Will one of those approaches help you?
-Perry