Computing graph deltas and applying update/delete routines is something I'm still trying to wrap my head around, but I was working on this same problem recently, so I figured I'd offer a suggestion and see what people think.
The nodes and edges I'm working with contain sets of metadata with varying degrees of complexity.
For example, for edges I capture things like "certainty" (a double), "source" (a string), "creator" (a string), etc. It's one thing to check whether an edge exists between two node IDs; it gets more complicated when you also want to consider the values attached to each edge. I might find the same relationship between the exact same nodes, but from different sources with different weights. In that case the edges aren't equivalent, and I want to capture both.
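To make that concrete, here's a hypothetical pair of edges between the same two endpoints (the field values are invented for illustration): the relationship matches, but the metadata doesn't, so both should be kept.

    # Same relationship between the same two nodes, but from different
    # sources with different certainty values -- these are two distinct
    # edges. (All values here are made up for illustration.)
    edge_a = {"certainty": 0.9, "source": "curated-db", "creator": "alice"}
    edge_b = {"certainty": 0.4, "source": "web-crawl", "creator": "bot-7"}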
What I basically do is take an MD5 hash of the edge's metadata structure and append it as an "e_md5" property at the time of edge creation (I also capture timestamps for when the edge was created and last updated). If that relationship is seen a second time for a given source and the MD5 hash is the same, it's considered unchanged from the previous load and the edge is not re-added.
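For what it's worth, here's a minimal Python sketch of the idea. The find_edges/add_edge calls are stand-ins for whatever your graph layer actually exposes (they're hypothetical, not a real API), and it assumes the metadata is a plain JSON-serializable dict:

    import hashlib
    import json
    import time

    def metadata_hash(metadata):
        # Serialize deterministically (sorted keys, fixed separators) so
        # that dict ordering doesn't change the digest, then MD5 it.
        canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))
        return hashlib.md5(canonical.encode("utf-8")).hexdigest()

    def upsert_edge(graph, src_id, dst_id, metadata):
        # find_edges/add_edge are placeholders for your storage layer.
        e_md5 = metadata_hash(metadata)
        for edge in graph.find_edges(src_id, dst_id):
            if edge.get("e_md5") == e_md5:
                # Same hash: unchanged since the previous load, skip re-add.
                return edge
        now = time.time()
        return graph.add_edge(src_id, dst_id, {
            **metadata,
            "e_md5": e_md5,
            "created_at": now,
            "updated_at": now,
        })

With edge_a and edge_b from above, metadata_hash() produces different digests, so both edges get added; loading edge_a a second time hits the early return. The deterministic serialization is the part that matters, since hashing a naively stringified dict would make the digest sensitive to key order.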
Maybe there's something equivalent already implemented that I'm not aware of and I'm reinventing the wheel. It's probably more taxing on storage, and I have yet to see how well it scales on extremely large graphs.
Just a thought - interested in any feedback.
Thanks!
Tim