C stack size limits


Chris Carleton

Oct 11, 2017, 2:10:33 PM
to nimble-users
Hi All,

I'm running Mac OS X on a 2012 Mac Air with 8GB RAM and I've run into the stack size limit with a nimble model. I'm thinking about changing the default limit for the whole system so that I can run my model, but I was wondering if anyone else had bumped up against that limit and, more importantly, the strategies people are using to deal with it. If possible, I'd like to do something impermanent. I also don't have a lot of experience with deep OS-level C/C++ programming and I'd really like not to destroy my system in the process.

I had a quick look through the Nimble repo to see if I could figure out a workaround involving only Nimble and maybe changing something at compile time for a given model, but I couldn't find anything and I'm not skilled enough with C/C++ to make much sense of the compiling stuff anyway.

Thanks,

Chris

PS Error and Cstack_info():

Error: C stack usage  7969688 is too close to the limit
> Cstack_info()
      size    current  direction eval_depth
   7969177       4136          1          2

Perry de Valpine

Oct 11, 2017, 2:57:38 PM
to Chris Carleton, nimble-users
Hi Chris,

There are a variety of reasons your RAM could be getting used so heavily.  NIMBLE does sometimes end up using surprisingly large amounts of memory in the steps of building and compiling models and MCMCs.  Some of the issue has to do with R.  It's something we're working on, and if that's really the problem I can suggest some strategies.  But the simplest thing to try would be to thin your MCMC sample.  If you are asking for a very large number of MCMC iterations and have monitors on many dimensions, then the posterior output can be enormous.  If you use a thinning interval (the thin argument to configureMCMC), that can reduce memory use dramatically.

Let me know if that helps.

Perry


--
You received this message because you are subscribed to the Google Groups "nimble-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to nimble-users+unsubscribe@googlegroups.com.
To post to this group, send email to nimble...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/nimble-users/c36d217e-a7f9-4228-a489-56420a595101%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

David Pleydell

Oct 12, 2017, 1:26:06 AM
to nimble-users
One strategy that can help reduce setup / compilation time is to reduce the number of nodes by writing functions that permit more complex node definitions. For example, for a state space model, instead of having one node for every numeric value, just use one node for the entire X vector and one node for the entire Y vector. This can dramatically reduce CPU time for setting up the model, but how much memory it can save I can't say.
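As a hedged sketch of this idea (not Chris's actual model): a Gaussian random-walk state process written as a single vector node via a user-defined distribution. All names here (dstateRW, x, sigma, n) are illustrative, and the registerDistributions call follows the pattern in the nimble user manual; details may differ across nimble versions.

```r
library(nimble)

# Hypothetical user-defined distribution: the whole latent vector is
# ONE node instead of one node per time step.
dstateRW <- nimbleFunction(
  run = function(x = double(1), sigma = double(0),
                 log = integer(0, default = 0)) {
    returnType(double(0))
    n <- length(x)
    # Gaussian random walk: x[t] ~ N(x[t-1], sigma)
    logProb <- sum(dnorm(x[2:n], mean = x[1:(n-1)], sd = sigma, log = TRUE))
    if (log) return(logProb) else return(exp(logProb))
  })

registerDistributions(list(
  dstateRW = list(
    BUGSdist = "dstateRW(sigma)",
    types = c("value = double(1)", "sigma = double(0)"))
))

code <- nimbleCode({
  x[1:n] ~ dstateRW(sigma = sigma)  # one node for the entire state vector
  sigma ~ dunif(0, 10)
})
```

The model graph then contains a handful of nodes rather than one per time step, which is what cuts the setup cost.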

Chris Carleton

Oct 12, 2017, 4:32:18 PM
to nimble-users
Thanks Perry and David!

I tried Perry's solution to no avail. I'll see about David's solution—not sure whether there are any nodes I can collapse in that way (see attached), but I'll give it a go. Otherwise, I'm happy to try and increase the memory limit if I can figure out how to do it safely without affecting my whole system (just in case some other program is misbehaving and I don't know it). If that doesn't work, I might be at an impasse.

Thanks again for all the help!

Chris
nimble_pois_v15b.R
pois_data_v15b.R

Duncan Temple Lang

Oct 12, 2017, 5:22:36 PM
to Chris Carleton, nimble-users
Hi Chris

To benignly increase the stack limit, you can do the following:
Before running R, in a terminal, run the three commands
   ulimit -s
   ulimit -s 16384
   ulimit -s
to query the current stack limit, then set it, and then check that the change took effect.

Assuming the stack limit is now larger, run R from within that console and run the nimble code.
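Put together, the session-local recipe looks like this (16384 KB = 16 MB is just an example value; pick what your model needs):

```shell
# Session-local stack increase: it affects only processes started from
# this terminal and vanishes when the terminal is closed, so no
# system-wide settings are touched.
ulimit -s          # current soft limit in KB (commonly 8192)
ulimit -s 16384    # raise to 16 MB for this session
ulimit -s          # verify the new value
R                  # R launched from this shell inherits the larger
                   # stack; confirm inside R with Cstack_info()
```

Note that a non-root user can only raise the soft limit up to the hard limit (`ulimit -Hs`).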

HOWEVER, it is very curious that the stack – rather than the heap – is growing so large.
That typically occurs when there is a recursive function OR large data objects are being allocated on the stack.
Perry et al., is the generated C++ code creating large objects, or very many objects, on the stack?

D.





--
Duncan Temple Lang
Director, Data Science Initiative
Professor of Statistics

Perry de Valpine

Oct 12, 2017, 6:16:03 PM
to Duncan Temple Lang, Chris Carleton, nimble-users
Chris,

Can you send the code for the (presumably) scaled-up problem that is eating up so much memory?

Perry


Chris Carleton

Oct 12, 2017, 6:57:52 PM
to nimble-users
Hi Perry and Duncan,

Thanks for the quick replies.

Find attached the script that produced the error and an R data file. The datafile just needs to be loaded into R as usual and contains only one dataframe—note that some columns in the dataframe contain NA values, but I'm not running the script using those columns in the model right now.

C

nimble_pois_v14_maya.R
maya_data.RData

Perry de Valpine

Oct 13, 2017, 4:40:06 PM
to Chris Carleton, nimble-users
Chris,

I'm not able to reproduce the painful memory use you are reporting.

I am encountering a different issue, which is that configureMCMC violates R's limit on recursion depth because of the (non-conditionally independent) structure of the model.  Daniel, if you are reading this, perhaps we should impose a fairly small limit on recursion depth during conjugacy checking and simply punt on conjugacy if that is exceeded?  Chris, please try "useConjugacy = FALSE" in your call to configureMCMC and see if that helps.
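The suggested call would look like the following sketch (the model object name is illustrative; the useConjugacy argument is documented in configureMCMC's help page):

```r
# Skip conjugacy checking entirely, which sidesteps the deep recursion
# during configureMCMC's dependency traversal. 'model' is a nimble
# model object returned by nimbleModel().
mcmcConf <- configureMCMC(model, useConjugacy = FALSE)
mcmc <- buildMCMC(mcmcConf)
```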

It is also worth updating to the current development version of nimble and seeing if that helps.  To do so, use

library(devtools)
install_github("nimble-dev/nimble", ref = "devel", subdir = "packages/nimble")

(There are a few extraneous diagnostic messages that you may see while running from this version.)

At some points during development we have found large transient memory allocations during model setup due to vectorized calculations in R that exploded in size for some kinds of large models.  We fixed those by reducing how those steps were done.  I don't think any of those changes were recent enough to explain what you are encountering, but it is conceivable.

Finally, if you write your model in the way we've discussed on other threads, making lambda[j] or loglambda[j] stochastic, that should reduce memory usage.  I don't think the reduction will be enough to account for the problem you are reporting, but I am not sure.  The reduction will occur because the graph dependencies will not grow into the hundreds for lambda[j] for large j.  Also the model should run faster for the same reason.
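One way to read this suggestion, as a hedged sketch (all names are placeholders; the actual reparameterization was discussed in the other threads, not here): give each loglambda[j] its own stochastic node instead of defining it deterministically from its predecessors, so dependencies stay local.

```r
library(nimble)

# Sketch: a stochastic random walk on the log scale. Changing
# loglambda[j] now only affects loglambda[j+1] and y[j], so the
# dependency set of each node stays small instead of growing with j.
code <- nimbleCode({
  loglambda[1] ~ dnorm(0, sd = 10)
  for (j in 2:J) {
    loglambda[j] ~ dnorm(loglambda[j-1], sd = sigma)
  }
  for (j in 1:J) {
    y[j] ~ dpois(exp(loglambda[j]))
  }
  sigma ~ dunif(0, 10)
})
```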

Perry





Chris Carleton

Oct 13, 2017, 9:40:16 PM
to nimble-users
Hi All,

I reinstalled nimble from source (dev, as per Perry's suggestion above). The problem seems resolved. There's still a lot of memory being used, but it's not raising any errors and it's completing. So, from my perspective it's resolved.

Thanks!

C