Memory issue in sparksee

s thomas

Sep 5, 2017, 2:12:02 AM
to Sparksee
Hello,
 I have written a simple program that opens a session, prints the number of edges in the session's graph, and closes the session. When this code runs in an infinite loop, I see a steady creep in memory consumption, which suggests that some resources are not being closed. Please help me identify the problem. The code is given below:

import java.io.FileNotFoundException;

import com.sparsity.sparksee.gdb.Database;
import com.sparsity.sparksee.gdb.Graph;
import com.sparsity.sparksee.gdb.Session;
import com.sparsity.sparksee.gdb.Sparksee;
import com.sparsity.sparksee.gdb.SparkseeConfig;

public class TestingSparksee {
    public static void main(String[] args) {
        Database database = null;
        String DB_FILE_PATH = "network_100N_180E.sparksee";
        SparkseeConfig cfg = new SparkseeConfig();
        Sparksee sparksee = new Sparksee(cfg);

        try {
            database = sparksee.open(DB_FILE_PATH, true);
        } catch (FileNotFoundException e1) {
            e1.printStackTrace();
            System.exit(-1);
        }

        while (true) {
            Session sess = database.newSession();
            Graph g = sess.getGraph();
            System.out.print(g.countEdges());
            g.delete();
            sess.close();
        }
    }
}

c3po.ac

Sep 5, 2017, 8:42:18 AM
to Sparksee

Hi,


Thank you for the heads-up and for the sample code to try it out.


After testing, it looks like the C++ core does not have a memory leak; instead, it may be the Java interface that has one, because memory consumption grows when using Java exactly as you described.


This is usually not a real memory leak: most of the memory allocated for each object lives in the C++ core, and only references to it are kept in Java. Because the Java-side objects are very small, the garbage collector may keep the deleted Java objects around for a long time, especially when, as in your example, the memory managed by Java is not needed for anything else.
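To illustrate that point with a generic sketch (not Sparksee's actual internals): a tiny Java wrapper over a large native allocation puts almost no pressure on the Java heap, so the collector has little incentive to run, while an explicit release method returns the native memory deterministically. The `Handle` class and the `nativeBytesInUse` counter below are illustrative placeholders.

```java
// Generic sketch: a small Java object wrapping a large "native" allocation.
// The Java heap only sees the tiny wrapper, so GC pressure stays low even
// while the (simulated) native side holds a lot of memory.
public class NativeWrapperDemo {
    static long nativeBytesInUse = 0;  // stands in for C++-side memory

    static class Handle implements AutoCloseable {
        private final long size;
        private boolean closed = false;

        Handle(long size) {
            this.size = size;
            nativeBytesInUse += size;  // "allocate" on the native side
        }

        @Override
        public void close() {
            if (!closed) {
                nativeBytesInUse -= size;  // deterministic release
                closed = true;
            }
        }
    }

    public static void main(String[] args) {
        // With explicit close (try-with-resources), the native memory is
        // returned immediately, without waiting for the garbage collector.
        for (int i = 0; i < 1000; i++) {
            try (Handle h = new Handle(1 << 20)) {  // 1 MiB each
                // ... use the handle ...
            }
        }
        System.out.println(nativeBytesInUse);  // prints 0: all released
    }
}
```

This is why explicitly closing sessions (and deleting session-scoped objects), as in your loop, is the right pattern: it frees the native side without relying on the Java collector.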


But we will keep looking into this in more depth, because we would prefer to rule out completely that there is a problem in the Java API interface. Can you please tell us which Java version, Java memory options, Sparksee version, and operating system you are using?


Anyway, for performance reasons, in a real application we would recommend creating only the number of sessions that you require (usually one per worker thread) and keeping them alive for the whole execution (until just before closing the database), because creating and destroying sessions can be expensive (depending on the operations used in the session).
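The recommended pattern can be sketched like this. Note that `MockDatabase` and `MockSession` below are placeholder classes standing in for Sparksee's `Database` and `Session`, so the sketch stays self-contained: each worker thread creates one long-lived session, reuses it for all of its work, and closes it only at shutdown.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of "one long-lived session per worker thread". MockDatabase and
// MockSession are placeholders for Sparksee's Database and Session classes.
public class SessionPerThreadDemo {
    static class MockSession implements AutoCloseable {
        void doWork() { /* query or update the graph here */ }
        @Override public void close() { }
    }

    static class MockDatabase {
        final AtomicInteger sessionsCreated = new AtomicInteger();
        MockSession newSession() {
            sessionsCreated.incrementAndGet();
            return new MockSession();
        }
    }

    static int run(int threads) {
        MockDatabase database = new MockDatabase();
        Thread[] workers = new Thread[threads];

        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                // One session per thread, created once and reused.
                try (MockSession sess = database.newSession()) {
                    for (int i = 0; i < 10_000; i++) {
                        sess.doWork();  // no per-operation session churn
                    }
                }  // closed only when the thread finishes
            });
            workers[t].start();
        }
        try {
            for (Thread w : workers) w.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        // Only as many sessions as worker threads were ever created.
        return database.sessionsCreated.get();
    }

    public static void main(String[] args) {
        System.out.println(run(4));  // prints 4: one session per thread
    }
}
```

Compared with the create/close-per-iteration loop in the original post, this amortizes the session setup cost over the thread's whole lifetime.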


Thanks.



On Tuesday, September 5, 2017 at 8:12:02 AM UTC+2, s thomas wrote: