Neo4j server can easily be crashed by an unoptimized query?

kincheong lau

Apr 18, 2016, 5:50:00 AM
to Neo4j
we run into problems when some of our developers run a badly optimized query:
1. the Neo4j server hangs and stops responding
2. there's no way to kill the long-running query
3. we usually have no choice but to force-restart Neo4j
4. we have applied indexes, but they don't seem to help much with our large data volume

we are using the Community edition; is there any server configuration we could try to avoid this performance issue?

Michael Hunger

Apr 18, 2016, 12:34:00 PM
to ne...@googlegroups.com
You should be able to abort the long-running query by terminating its transaction:

1. click the (x) in the Neo4j browser
2. press Ctrl-C if you use the Neo4j shell
3. if you run the statements programmatically, create a tx (embedded or remote) and then call tx.terminate() from another thread (see the sketch below)
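
A minimal sketch of option 3 against the embedded 2.3.x Java API, assuming Java 8. The store path, the 30-second budget, and the watchdog pattern are illustrative assumptions; Transaction.terminate() and TransactionTerminatedException are the actual API:

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.TransactionTerminatedException;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class TerminateLongQuery {
    public static void main(String[] args) {
        // path is a placeholder; note that 3.0 takes a java.io.File here
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabase("/path/to/graph.db");

        final Transaction tx = db.beginTx();

        // watchdog thread: terminate the transaction if the query runs too long
        Thread watchdog = new Thread(() -> {
            try {
                Thread.sleep(30_000);  // assumed 30s budget for the query
            } catch (InterruptedException e) {
                return;                // query finished in time
            }
            tx.terminate();            // aborts the running query
        });
        watchdog.start();

        try {
            db.execute("MATCH (reader)-[u:SUBSCRIBE_TO]->(book) DELETE u");
            tx.success();
        } catch (TransactionTerminatedException e) {
            System.err.println("query was terminated by the watchdog");
        } finally {
            tx.close();
            watchdog.interrupt();      // stop the watchdog if it is still waiting
        }

        db.shutdown();
    }
}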

What was the query?

You'll have to share more detail if you want help with query optimization (data model, query, existing indexes, machine config, or graph.db/messages.log).

Michael



kincheong lau

Apr 18, 2016, 9:28:50 PM
to Neo4j
in the Neo4j browser, I usually get a 'disconnected from server...' error before I realize there's a problem with the query, and by then it has already slowed down the server, even when I click (x).

yesterday I was trying to delete relationships: there are 8,xxx SUBSCRIBE_TO relationships, but I had to break the delete down using LIMIT 100 to prevent the server from hanging.

MATCH (reader)-[u:SUBSCRIBE_TO]->(book)
WITH reader, u, book
DELETE u;
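
For reference, the LIMIT-100 batching mentioned above would look like this (the WITH is what carries the LIMIT; run it repeatedly until no more relationships are deleted):

MATCH (reader)-[u:SUBSCRIBE_TO]->(book)
WITH u LIMIT 100
DELETE u;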

Michael Hunger

Apr 19, 2016, 2:20:31 AM
to ne...@googlegroups.com
1. use labels
2. you don't need the WITH (except for LIMIT batching)
3. you should be able to delete up to 1M entities per transaction with a heap size of 4G, so you can use LIMIT 1000000

(a combined example is sketched below)
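
Putting those three points together, the delete becomes something like this (the :Reader and :Book labels are assumptions here; the posted query matched unlabeled nodes):

MATCH (:Reader)-[u:SUBSCRIBE_TO]->(:Book)
WITH u LIMIT 1000000
DELETE u;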

Michael

kincheong lau

Apr 19, 2016, 5:49:00 AM
to Neo4j
I read through some performance tuning docs; would it help if I tuned the parameters below, and how large should I configure them?

dbms.pagecache.memory
node_cache_size
relationship_cache_size

the goal we are trying to achieve is that even when one user runs a heavily loaded query, it does not impact other users' access to Neo4j (something like a shared resource pool?)
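
For reference, in 2.3.x the first setting lives in conf/neo4j.properties and the heap in conf/neo4j-wrapper.conf; a sketch with placeholder values (size them to your machine's RAM, these are not recommendations):

# conf/neo4j.properties
dbms.pagecache.memory=4g

# node_cache_size / relationship_cache_size apply to the object cache
# (cache_type=hpc), which is an Enterprise feature, so as far as I know
# they will not help on the Community edition
#node_cache_size=1g
#relationship_cache_size=1g

# conf/neo4j-wrapper.conf -- JVM heap in MB (the 1M-deletes-per-transaction
# figure above assumed roughly a 4G heap)
wrapper.java.initmemory=4096
wrapper.java.maxmemory=4096

As far as I know there is no per-user resource pool in the Community edition, so terminating runaway transactions (as sketched earlier in the thread) is the main lever for protecting other users.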