Batch insertion is not transactional. If something goes wrong and you don't shutDown() your database properly, the database becomes inconsistent.
I had to interrupt one of the batch insertion tasks because it was taking much longer than expected, which left my database in an inconsistent state. I get the following message:
db_name store is not cleanly shut down
How can I recover my database from this state? Also, for future runs, is there a way to commit after importing each file, so that reverting to the last good state would be trivial? I thought of git, but I am not sure it would help with a binary file like index.db.
--
You received this message because you are subscribed to a topic in the Google Groups "Neo4j" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/neo4j/y7amc5GewrM/unsubscribe.
To unsubscribe from this group and all its topics, send an email to neo4j+un...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
What is the actual issue you are running into?
What OS do you run it under?
Do you have enough heap assigned to your import process and configured the memory mapping settings appropriately?
Use a configuration with cache_type=none and memory-mapped I/O settings that give a large value to the relationship store and large-enough values to the node and property stores.
E.g. 10% each for the node and property stores and 80% for the relationship store.
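For example, a batch.properties along these lines, passed as the config to the batch inserter (the setting names are the Neo4j 1.9/2.0 mapped-memory ones; the sizes are illustrative only and assume roughly 4G of mappable memory, split per the 10/10/80 suggestion above):

```properties
# Illustrative values only; scale to your actual store sizes.
cache_type=none
use_memory_mapped_buffers=true
neostore.nodestore.db.mapped_memory=400M
neostore.propertystore.db.mapped_memory=400M
neostore.relationshipstore.db.mapped_memory=3G
```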
Responses inline.

> What is the actual issue you are running into?
> What OS do you run it under?

I am running it on Ubuntu 13.04.

> Do you have enough heap assigned to your import process and configured the memory mapping settings appropriately?

I assigned 3 GB. The file has about 10M relationships and 3M nodes, which translates to about 1 GB of heap space according to the link you sent me. Also, this is done on an existing graph. Should I count the nodes already in the graph too?
Make sure to have a decent disk and the disk scheduler set to noop or deadline, see:

Don't do the barrier=0 though, as this is not safe; it would only make sense for large imports to mount the disk that way, and only during the import itself.
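For example, to check and switch the scheduler on Linux (sda is an assumption for the device holding the store; this needs root and does not persist across reboots):

```shell
# Show the available schedulers; the active one is in [brackets].
cat /sys/block/sda/queue/scheduler
# Switch to deadline (or noop) for the duration of the import.
echo deadline > /sys/block/sda/queue/scheduler
```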
That's tiny and should not take more than a minute.

> Do you have enough heap assigned to your import process and configured the memory mapping settings appropriately?
> I assigned 3 GB. The file has about 10M relationships, and 3M nodes. This translates to about 1G of heap space from the link that you sent to me. Also, this is done on an existing graph. Should I count the nodes in the graph too?

How large is the existing db?
Can you show the output from the import run?
The batch-inserter slowdown is a regression in 2.0.0 and is currently being worked on.

How often do you have to run the import?