I just noticed this thread because of your recent reply, and happened to read through. (I haven't regularly read sage-devel for a while.)
As to your original email: I think there is a subtle python memory management issue there. If you run
sage: BIG=myfunction(somevars)
sage: BIG=myfunction(somevars)
then on the second invocation, Python evaluates the right-hand side first and only rebinds the name BIG afterwards. While the second call is running, BIG still refers to the old result, so the garbage collector cannot reclaim it: the old and new results are both alive at the same time, and peak memory use is roughly doubled. So it seems reasonable to me that
sage: BIG=myfunction(somevars)
sage: BIG = 0
sage: BIG=myfunction(somevars)
may behave differently.
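A minimal plain-Python sketch of that rebinding order (the Tracked class and its tags are invented for illustration; in CPython, reference counting frees the old result only once the name is rebound):

```python
class Tracked:
    """Records the order in which instances are reclaimed."""
    freed = []

    def __init__(self, tag):
        self.tag = tag

    def __del__(self):
        Tracked.freed.append(self.tag)


def myfunction(tag):
    # While this call runs during a reassignment, the previous result
    # is still bound to BIG, so old and new results coexist in memory.
    return Tracked(tag)


BIG = myfunction("first")
assert Tracked.freed == []            # "first" is alive, bound to BIG
BIG = myfunction("second")
assert Tracked.freed == ["first"]     # "first" freed only at rebinding
BIG = 0
assert Tracked.freed == ["first", "second"]
```

Inserting BIG = 0 between the two calls drops the old result before the second call starts, which is why the two transcripts can peak at different memory levels.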
Having said all that... It doesn't sound right that running the function once costs 50% of RAM, and running it twice (with the BIG = 0 in between) costs 75%. However, there are certainly situations where that can happen. As was mentioned, Sage caches some computations, and that can occasionally lead to unwanted memory use. Additionally, when running this sort of short test, it is a good idea to manually invoke the Python garbage collector (import gc; gc.collect()) before conclusively declaring that there is a memory leak.
The _best_ way to help (and get help) and to get attention, if there is really a memory leak, is to write a short loop that looks something like
import gc
while True:
    x = some_simple_function()
    gc.collect()
    print(get_memory_usage())
and outputs an increasing sequence of numbers.
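For reference, here is a self-contained version of that loop that runs outside Sage on Linux (get_memory_usage is Sage-specific, so the standard resource module stands in for it, and leaky is a deliberately leaking placeholder for the function under suspicion):

```python
import gc
import resource


def rss_kb():
    # Peak resident set size so far; ru_maxrss is reported in KB on Linux.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss


def leaky():
    # Placeholder for the suspect function: it stashes every result in a
    # module-level cache, so memory genuinely grows on each call.
    leaky.cache = getattr(leaky, "cache", [])
    leaky.cache.append(list(range(100000)))


for _ in range(5):        # bounded here; use `while True:` for a real test
    leaky()
    gc.collect()          # rule out merely-uncollected garbage first
    print(rss_kb())       # a steadily increasing sequence suggests a leak
```

If the numbers stop growing once gc.collect() is in the loop, the problem was uncollected garbage rather than a true leak.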
Going from some complicated code to a simple loop like that may be an arduous debugging task in itself, and is something I would consider a valuable service to Sage if it really finds a bug. In the intermediate regime, just sharing some code could be useful, if you are willing and able. There are at least a few people (such as myself, during the occasional periods when I am paying attention) with >4 GB of RAM and 10 minutes of cpu cycles to spare, who may be willing to help.
Finally (and this is the reason that I read through this thread and replied), there was a change in the way that Sage manages PARI memory usage (between 7.0 and 7.1, I think. See
https://trac.sagemath.org/ticket/19883) which probably affects a very small number of users, but affects them very badly. (I know about this because it affects me.) If, on your machine with 100 GB of RAM, the output of 'cat /proc/sys/vm/overcommit_memory' is 2, then it affects you. Alternatively, if overcommit_memory is 0, then it is possible you are misreading the memory usage: the virtual memory usage will be high, but not the actual memory usage. The problem will hopefully be fixed by 7.4 (see
https://trac.sagemath.org/ticket/21582), but the high virtual memory usage confusion will probably persist. Of course, it is also quite possible that you've found some other bad problem that popped up between 7.0 and 7.1.
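In case it is more convenient, the same overcommit check can be done from Python (overcommit_mode is a hypothetical helper name; per the kernel documentation, 0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting):

```python
def overcommit_mode(path="/proc/sys/vm/overcommit_memory"):
    """Return the kernel's memory overcommit policy as an integer."""
    # Under strict accounting (mode 2), large virtual reservations such
    # as a big pre-allocated PARI stack can fail even when plenty of
    # physical RAM is still free.
    with open(path) as f:
        return int(f.read().strip())
```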