Dear all,
there seems to be a memory leak in canonical_label(...) when using bliss.
Here is a test script to demonstrate the problem:
-----------------
import os, psutil
from sage.all import *

process = psutil.Process(os.getpid())
oldmem = process.memory_info().rss
for i in range(1000000):
    G = graphs.RandomGNM(10, 20)
    canonG = G.canonical_label(algorithm='bliss')
    # canonG = G.canonical_label(algorithm='sage')
    if i % 1000 == 0:
        # report RSS growth since the previous sample (every 1000 calls)
        print(f"graph count {i}, mem usage (Delta) {process.memory_info().rss - oldmem}")
        oldmem = process.memory_info().rss
--------
This uses up more and more memory when I use 'bliss' as the algorithm:
on my machine, roughly 260 KB are lost per 1000 calls.
Invoking garbage collection manually (see the sketch below) does not help.
I believe this might be a bug.
There is no memory leak when using 'sage' as the algorithm.
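For completeness, this is roughly what I mean by forcing collection manually (a minimal sketch; the gc.collect() call and the shorter loop count are only for illustration, and the reported growth with 'bliss' is unchanged):
-----------------
import gc, os, psutil
from sage.all import *

process = psutil.Process(os.getpid())
oldmem = process.memory_info().rss
for i in range(100000):
    G = graphs.RandomGNM(10, 20)
    canonG = G.canonical_label(algorithm='bliss')
    if i % 1000 == 0:
        gc.collect()  # force a full collection before sampling RSS
        newmem = process.memory_info().rss
        print(f"graph count {i}, mem usage (Delta) {newmem - oldmem}")
        oldmem = newmem
--------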
Info on my system:
MacBook Pro with M1 processor, macOS Monterey,
sage 9.4, bliss 0.73
Thanks a lot for your help,
Thomas
P.S.: The amount of memory lost in the sample script is small on an absolute scale, but our programs run for a week or two and we end up losing many GB.
The obvious workaround is to use algorithm='sage', but that is about twice as slow as bliss.
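For reference, a rough way to quantify the slowdown (hypothetical timing harness, not a proper benchmark; the factor of two above comes from our actual programs):
-----------------
import timeit
from sage.all import *

# Time 1000 canonical_label calls per algorithm on the same batch of graphs
# (sketch only; absolute numbers depend heavily on the machine).
batch = [graphs.RandomGNM(10, 20) for _ in range(1000)]
for alg in ('bliss', 'sage'):
    t = timeit.timeit(
        lambda: [G.canonical_label(algorithm=alg) for G in batch],
        number=1,
    )
    print(f"{alg}: {t:.2f} s for 1000 calls")
--------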