import java.io.FileNotFoundException;

import com.sparsity.sparksee.gdb.Database;
import com.sparsity.sparksee.gdb.Graph;
import com.sparsity.sparksee.gdb.Session;
import com.sparsity.sparksee.gdb.Sparksee;
import com.sparsity.sparksee.gdb.SparkseeConfig;

public class TestingSparksee {
    public static void main(String[] args) {
        Database database = null;
        String DB_FILE_PATH = "network_100N_180E.sparksee";
        SparkseeConfig cfg = new SparkseeConfig();
        Sparksee sparksee = new Sparksee(cfg);
        try {
            database = sparksee.open(DB_FILE_PATH, true);
        } catch (FileNotFoundException e1) {
            e1.printStackTrace();
            System.exit(-1);
        }
        // Repeatedly create and destroy sessions to observe memory growth.
        while (true) {
            Session sess = database.newSession();
            Graph g = sess.getGraph();
            System.out.print(g.countEdges());
            g.delete();
            sess.close();
        }
    }
}
Hi,
Thank you for the heads-up and for the sample code to reproduce it.
After testing, it looks like the C++ core does not have a memory leak; it is the Java interface that might have one, because memory consumption in Java grows exactly as you described.
This is usually not a real memory leak: most of the memory allocated for each object lives in the C++ core, and only references to it are kept in Java. Because those references are very small objects, the Java garbage collector may keep the deleted ones around for a long time, especially if, as in your example, the memory managed in Java is not needed for anything else.
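To illustrate the point with standard Java only (a sketch, not the Sparksee internals): a direct ByteBuffer behaves much like a Sparksee wrapper object, in that a tiny Java object fronts a large allocation outside the Java heap, so the garbage collector feels little pressure to reclaim it.

```java
import java.nio.ByteBuffer;

public class NativeWrapperDemo {
    public static void main(String[] args) {
        // A direct ByteBuffer is a small Java object whose real storage
        // lives outside the Java heap -- analogous to a Java wrapper
        // holding a reference into native (C++) memory.
        ByteBuffer big = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB off-heap

        Runtime rt = Runtime.getRuntime();
        long heapUsedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        // The 64 MB allocation barely registers on the Java heap, so the
        // collector sees only a tiny object and may defer reclaiming it.
        System.out.println("heap used (MB): " + heapUsedMb);
        System.out.println("buffer capacity (MB): " + big.capacity() / (1024 * 1024));
    }
}
```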
But we will keep looking into this in more depth, because we would like to completely rule out a problem in the Java API. Could you please tell us which Java version, Java memory options, Sparksee version, and operating system you are using?
In any case, for performance reasons, in a real application we recommend creating only as many sessions as you need (usually one per worker thread) and keeping them alive for the whole execution (until just before closing the database), because creating and destroying sessions can be expensive (depending on the operations used in the session).
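A minimal sketch of that session-per-thread pattern, using hypothetical stand-in classes rather than the real Sparksee API (the `Database`/`Session` stubs below only mimic the lifecycle): each worker gets one session up front, reuses it for all its work, and closes it only when the worker finishes; the database is closed last.

```java
import java.util.ArrayList;
import java.util.List;

public class SessionPerThreadDemo {
    // Hypothetical stand-ins for the real Database/Session classes,
    // here only to show the lifecycle, not the graph API.
    static class Session {
        void close() {}
    }
    static class Database {
        int sessionsCreated = 0;
        Session newSession() { sessionsCreated++; return new Session(); }
        void close() {}
    }

    public static void main(String[] args) throws InterruptedException {
        Database db = new Database();
        int workers = 4;
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < workers; i++) {
            // One session per worker thread, created once and reused,
            // instead of a new session per operation.
            Session sess = db.newSession();
            Thread t = new Thread(() -> {
                for (int op = 0; op < 1000; op++) {
                    // ... do graph work with the same long-lived session ...
                }
                sess.close(); // closed only when this worker is done
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) t.join();
        db.close(); // close the database only after all sessions are closed
        System.out.println("sessions created: " + db.sessionsCreated);
    }
}
```

Note that only four sessions are ever created, however many operations the workers run.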
Thanks.