--
You received this message because you are subscribed to the Google Groups "simmer-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email to simmer-devel+unsubscribe@googlegroups.com.
To post to this group, send email to simmer...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/simmer-devel/2bfe2522-a1cd-4baf-9d8c-686417fe0f0e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Hi Juan,
I’ve analysed the code you sent me privately. The conclusions may be valuable for someone else, so I’ll reply through the mailing list.
First of all, I’ve ruled out any memory leaks (in the current version of simmer on CRAN). What happens is that your simulation is huge: you are generating a lot of data, so you need a different approach.
mclapply creates new workers basically by duplicating the current environment. So start with an environment as clean as possible. Ideally, you should launch a script containing only the necessary steps with Rscript. If you need to play around with RStudio, for instance, remove unwanted variables with rm and call the garbage collector gc before running the simulation.

wrap is a nice tool, but it cannot be used when the simulation environment grows huge. For each run, the best approach is to extract the data you need, add the replication index and write the data to disk. An example of this:
library(simmer)
library(parallel)  # provides mclapply

# other stuff

simulation <- function(i) {
  des <- simmer()
  # define trajectories
  # add resources
  # add generators
  # run

  # save arrivals to disk
  A <- get_mon_arrivals(des, per_resource = TRUE)
  A$replication <- i
  write.csv(A, file = paste0("arrivals_", i, ".csv"), row.names = FALSE)

  # save attributes to disk
  B <- get_mon_attributes(des)
  B$replication <- i
  write.csv(B, file = paste0("attributes_", i, ".csv"), row.names = FALSE)

  # save resources to disk
  C <- get_mon_resources(des)
  C$replication <- i
  write.csv(C, file = paste0("resources_", i, ".csv"), row.names = FALSE)
}

mclapply(1:100, simulation, mc.preschedule = FALSE)

# load data
# analyse data
This way, you should be able to run as many replications as you want (provided that the volume of data generated per run, times the number of cores of your computer, stays below your RAM capacity, which in my case is true). Then, once the simulation is done, you’ll have the data for all the replications on disk, and you’ll see whether it fits in RAM. If not, you’ll need special packages for the analysis part, but that’s another story…
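As a sketch of the loading step above, assuming the per-run CSVs written by simulation() are in the working directory (load_runs is just an illustrative helper here, not a simmer function):

```r
# Gather the per-replication CSVs back into single data frames.
# The "replication" column added before writing keeps the runs apart.
load_runs <- function(pattern) {
  files <- list.files(pattern = pattern)
  do.call(rbind, lapply(files, read.csv))
}

# arrivals   <- load_runs("^arrivals_[0-9]+\\.csv$")
# attributes <- load_runs("^attributes_[0-9]+\\.csv$")
# resources  <- load_runs("^resources_[0-9]+\\.csv$")
```

If the combined result does not fit in memory, the same pattern works file-by-file: read one CSV, aggregate it, keep only the summary, and discard the raw data before reading the next.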
Regards,
Iñaki