Apparently the group has been quiet for quite some time - hopefully people are still reading!
I'm using tinypy in a game where a simulated CPU runs multiple processes, each of which is a running tinypy VM. Tinypy was well suited for this in particular because of its step-by-step execution of the python byte-code. My modifications allow the simulated CPU to run n steps of each VM in a round-robin fashion, simulating processes that are time-slicing the CPU. I've also extended the tinypy built-ins with message-passing routines, so the processes can communicate with each other. And tinypy has been performing just great!
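For anyone curious, the round-robin scheduling is conceptually simple. This is just a sketch of the idea, not my actual code: the `proc_t`, `proc_step`, and `run_all` names are made up here, and a stand-in step counter plays the role of the tinypy VM's single-step entry point.

```c
#include <stddef.h>

#define SLICE 4  /* n byte-code steps per time slice (arbitrary for the sketch) */

/* Stand-in for a running tinypy VM: real code would hold the VM state and
 * call the interpreter's single-step routine; here each "process" just
 * counts the steps it has been given, so only the scheduling is shown. */
typedef struct {
    int id;
    int steps_done;
    int steps_needed;  /* process finishes after this many steps */
    int alive;
} proc_t;

/* Execute one byte-code step of a process; returns 0 when it is finished. */
static int proc_step(proc_t *p) {
    p->steps_done++;
    return p->steps_done < p->steps_needed;
}

/* Round-robin: give each live process up to SLICE steps, then move on to
 * the next, looping until every process has run to completion. */
static void run_all(proc_t *procs, int n) {
    int live = n;
    while (live > 0) {
        for (int i = 0; i < n; i++) {
            int s;
            if (!procs[i].alive) continue;
            for (s = 0; s < SLICE && procs[i].alive; s++) {
                if (!proc_step(&procs[i])) {
                    procs[i].alive = 0;
                    live--;
                }
            }
        }
    }
}
```

In the real thing, `proc_step` is where the tinypy interpreter advances one byte-code instruction, and a process that blocks on a message receive simply yields its remaining slice.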
I'm getting quite familiar with v1.1, but have also recently downloaded the "cutting edge" code that includes a few fixes. But what I don't quite understand in either version is the GC implementation, or frankly, its philosophy. I haven't spent much time in this area of the code (yet) because it seems to work. I have, though, replaced the malloc/free calls with tracking routines that let the simulated CPU know how much memory each process is using.
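In case it's useful to anyone doing the same, my tracking approach boils down to prefixing each allocation with a small header. This is a simplified sketch, not the code as-is: the function names, the per-process id scheme, and the `MAX_PROCS` limit are all inventions for illustration.

```c
#include <stdlib.h>
#include <stddef.h>

#define MAX_PROCS 8  /* arbitrary cap for the sketch */

/* Per-process byte counts, indexed by a process id the host assigns. */
static size_t mem_used[MAX_PROCS];

/* Each allocation is prefixed with a header recording its size and owning
 * process, so tracked_free() knows what to subtract with no extra lookup. */
typedef struct {
    size_t size;
    int owner;
} alloc_hdr;

void *tracked_malloc(int proc_id, size_t size) {
    alloc_hdr *h = malloc(sizeof(alloc_hdr) + size);
    if (!h) return NULL;
    h->size = size;
    h->owner = proc_id;
    mem_used[proc_id] += size;
    return h + 1;  /* hand the caller the bytes just past the header */
}

void tracked_free(void *ptr) {
    alloc_hdr *h;
    if (!ptr) return;
    h = (alloc_hdr *)ptr - 1;
    mem_used[h->owner] -= h->size;
    free(h);
}

size_t tracked_usage(int proc_id) {
    return mem_used[proc_id];
}
```

The simulated CPU then just reads `tracked_usage()` for each process whenever it needs a memory report.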
My question is, simply: when is garbage collection actually performed? I haven't seen any collection occurring until a VM is deinit()'d, at which point all of its memory is freed, which is nice. But is there any point during the run of the python byte-code at which garbage collection (memory freeing) happens?
I've noted in the "cutting edge" download that the amount of memory is also being tracked in the "sandbox", which does pretty much what my code did to monitor memory usage.