I noticed a memory leak while running my algorithm over all datasets in one job. After investigating, I believe it is due to memory building up after calls to the get_linear_acquisition_model function.
I tried computing the forward projection of the linear model with the following two options:

```python
# Option 1: use the linear model explicitly
acq_linear = self._acq_models[i].get_linear_acquisition_model()
fwd = acq_linear.forward(self._x_one)
del acq_linear

# Option 2: subtract the constant term from the full model
fwd = self._acq_models[i].forward(self._x_one) - self._acq_models[i].get_constant_term()
```
The figure below shows the memory usage (after running a few datasets) for the two options: purple is option 1 and red is option 2.
[Figure: Screenshot.2026-02-13.173404.png — memory usage for the two options]
@evgueni-ovtchinnikov how does get_linear_acquisition_model() handle shared ownership?
> how does get_linear_acquisition_model() handle shared ownership?
@KrisThielemans the underlying C++ code just copies some shared pointers; no data is allocated.
The reason for the memory leak could be the absence of __del__ in SIRF's base class AcquisitionModel.
@hsw43 can you please add to class AcquisitionModel (in SIRF file STIR.py) this method:
```python
def __del__(self):
    if self.handle is not None:
        pyiutil.deleteDataHandle(self.handle)
```
and then build SIRF and repeat your memory leak tests?
Thanks @evgueni-ovtchinnikov.
@hsw43 I'm not sure if you have time for this, and rebuilding SIRF is probably out of the question as the docker image doesn't contain the necessary files. However, you can just edit the STIR.py on your system (if you cannot find it, in Python something like `import sirf.STIR; print(sirf.STIR.__file__)` will show its location).
However, @evgueni-ovtchinnikov it shouldn't be too hard to reproduce this yourself. Just put a loop around all of the code above (adding an actual call to acq_model.forward) and monitor memory via a system monitor, or possibly:
```python
import psutil

ram = psutil.virtual_memory()
print("RAM used (GB):", round(ram.used / 1e9, 2))
```
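For a self-contained check without SIRF installed, the same loop-and-measure idea can be sketched with the standard library's tracemalloc; here `run_iteration` is a dummy workload standing in for the forward call, purely to illustrate the harness:

```python
import tracemalloc

def run_iteration(leak_store, leak=False):
    data = [0.0] * 100_000  # stand-in for a forward projection result
    if leak:
        leak_store.append(data)  # simulate a handle that is never freed

def peak_growth(n_iters, leak):
    """Return the growth in traced memory (bytes) over n_iters iterations."""
    leak_store = []
    tracemalloc.start()
    baseline = tracemalloc.get_traced_memory()[0]
    for _ in range(n_iters):
        run_iteration(leak_store, leak)
    current = tracemalloc.get_traced_memory()[0]
    tracemalloc.stop()
    return current - baseline

# A leaking loop keeps growing; a clean one stays near the baseline.
print("clean growth (bytes):", peak_growth(10, leak=False))
print("leaky growth (bytes):", peak_growth(10, leak=True))
```

Replacing the dummy workload with the real `acq_linear.forward(...)` call would show whether memory grows per iteration before and after the `__del__` fix.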
I do not have self._acq_models[i] and self._x_one, I am afraid.
Besides, rebuilding SIRF on our SIRF VM after the missing __del__ is added should just copy STIR.py to ~/devel/install/python/sirf.
> I do not have self._acq_models[i] and self._x_one, I am afraid.
The memory leak should happen with any acq_model as long as there's a constant term. x_one is just any image (actually filled with 1, but that's not important for this test).
> Besides, rebuilding SIRF on our SIRF VM after the missing __del__ is added should just copy STIR.py to ~/devel/install/python/sirf.
Sure, but I doubt @hsw43 is using the VM (our latest is 3.8 anyway). Fairly sure he was using the petric2 docker image (which doesn't contain sources).
Sorry for the delay.
I actually built SIRF following this (with Kris's help).
I have found STIR.py and added the __del__ method to the AcquisitionModel class.
I wonder if I need to rebuild SIRF before running my memory tests? Or is it fine to run them directly?
Great. You don't need to rebuild SIRF, but you then need to be careful which STIR.py you modify (the one in sources/SIRF isn't used by Python). Obviously, you can just check the class definition to see if everything is fine.
thanks!
@hsw43 On SIRF VM I do this:
```
sirfuser@vagrant:~$ cd devel/buildVM/builds/SIRF/build/
sirfuser@vagrant:~/devel/buildVM/builds/SIRF/build$ make install
```
Since you just edited a Python script, STIR.py, this actually simply copies that script to ~/devel/install/python/sirf.
Just ran a quick test with the two most memory-intensive datasets (Vision600_ZrNEMA and Vision600_Hoffman). I was able to run the Hoffman after ZrNEMA (which I could not when using get_linear_acquisition_model() without the __del__ method), which might be a good sign.
However, it seems that a small portion of memory is still in use. I need to run on all datasets to confirm.
I just reran SPDDY on all the datasets and was able to run everything with one call.
Although there is a small discrepancy in available memory going from one dataset to another, I think the memory leak due to calling get_linear_acquisition_model has been fixed.
Excellent. Thanks a lot for testing!
@evgueni-ovtchinnikov please create a PR with the fix (and merge it :-))