java.lang.OutOfMemoryError: GC overhead limit exceeded when running 'neo4j-admin check-consistency' - Any ideas?


unreal...@googlemail.com

Feb 26, 2017, 2:28:16 PM
to Neo4j
The following output was obtained:

.
.
.

....................  90%
2017-02-26 00:03:16.883+0000 INFO  [o.n.c.ConsistencyCheckService] === Stage7_RS_Backward ===
2017-02-26 00:03:16.885+0000 INFO  [o.n.c.ConsistencyCheckService] I/Os
RelationshipStore
  Reads: 3374851294
  Random Reads: 2743390177
  ScatterIndex: 81

2017-02-26 00:03:16.886+0000 INFO  [o.n.c.ConsistencyCheckService] Counts:
  10338005177 skipCheck
  1697668360 missCheck
  5621138678 checked
  10338005177 correctSkipCheck
  1688855306 skipBackup
  3951022795 overwrite
  2247865 noCacheSkip
  239346598 activeCache
  119509521 clearCache
  2429587416 relSourcePrevCheck
  995786837 relSourceNextCheck
  2058354842 relTargetPrevCheck
  137409583 relTargetNextCheck
  6917470274 forwardLinks
  7991190672 backLinks
  1052730774 nullLinks
2017-02-26 00:03:16.887+0000 INFO  [o.n.c.ConsistencyCheckService] Memory[used:1.09 GB, free:1.07 GB, total:2.17 GB, max:26.67 GB]
2017-02-26 00:03:16.887+0000 INFO  [o.n.c.ConsistencyCheckService] Done in  1h 36m 37s 219ms
.........2017-02-26 00:23:26.188+0000 INFO  [o.n.c.ConsistencyCheckService] === RelationshipGroupStore-RelGrp ===
2017-02-26 00:23:26.189+0000 INFO  [o.n.c.ConsistencyCheckService] I/Os
NodeStore
  Reads: 231527337
  Random Reads: 228593774
  ScatterIndex: 98
RelationshipStore
  Reads: 420334193
  Random Reads: 143404207
  ScatterIndex: 34
RelationshipGroupStore
  Reads: 409845841
  Random Reads: 105935972
  ScatterIndex: 25

2017-02-26 00:23:26.189+0000 INFO  [o.n.c.ConsistencyCheckService] Counts:
2017-02-26 00:23:26.190+0000 INFO  [o.n.c.ConsistencyCheckService] Memory[used:751.21 MB, free:1.29 GB, total:2.02 GB, max:26.67 GB]
2017-02-26 00:23:26.191+0000 INFO  [o.n.c.ConsistencyCheckService] Done in  20m 9s 303ms
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-11" java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.getFrame(OrdsSegmentTermsEnum.java:131)
at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.pushFrame(OrdsSegmentTermsEnum.java:158)
at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.seekExact(OrdsSegmentTermsEnum.java:391)
at org.apache.lucene.index.TermContext.build(TermContext.java:94)
at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.ConstantScoreQuery.createWeight(ConstantScoreQuery.java:119)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:239)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:887)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
at org.neo4j.kernel.api.impl.schema.reader.SimpleIndexReader.countIndexedNodes(SimpleIndexReader.java:136)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.verifyNodeCorrectlyIndexed(PropertyAndNodeIndexedCheck.java:171)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.checkIndexToLabels(PropertyAndNodeIndexedCheck.java:113)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:71)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:48)
at org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:124)
at org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:440)
at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
at org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:137)
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-21" java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnumFrame.<init>(OrdsSegmentTermsEnumFrame.java:52)
at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.<init>(OrdsSegmentTermsEnum.java:84)
at org.apache.lucene.codecs.blocktreeords.OrdsFieldReader.iterator(OrdsFieldReader.java:141)
at org.apache.lucene.index.TermContext.build(TermContext.java:93)
at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:239)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:887)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
at org.neo4j.kernel.api.impl.schema.reader.SimpleIndexReader.countIndexedNodes(SimpleIndexReader.java:136)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.verifyNodeCorrectlyIndexed(PropertyAndNodeIndexedCheck.java:171)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.checkIndexToLabels(PropertyAndNodeIndexedCheck.java:113)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:71)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:48)
at org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:124)
at org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:440)
at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
at org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:137)
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-8" java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.getFrame(OrdsSegmentTermsEnum.java:128)
at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.pushFrame(OrdsSegmentTermsEnum.java:158)
at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.seekExact(OrdsSegmentTermsEnum.java:391)
at org.apache.lucene.index.TermContext.build(TermContext.java:94)
at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.ConstantScoreQuery.createWeight(ConstantScoreQuery.java:119)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:239)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:887)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
at org.neo4j.kernel.api.impl.schema.reader.SimpleIndexReader.countIndexedNodes(SimpleIndexReader.java:136)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.verifyNodeCorrectlyIndexed(PropertyAndNodeIndexedCheck.java:171)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.checkIndexToLabels(PropertyAndNodeIndexedCheck.java:113)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:71)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:48)
at org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:124)
at org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:440)
at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
at org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:137)
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-46" java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.lucene.codecs.blocktreeords.FSTOrdsOutputs.newOutput(FSTOrdsOutputs.java:225)
at org.apache.lucene.codecs.blocktreeords.FSTOrdsOutputs.add(FSTOrdsOutputs.java:162)
at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.seekExact(OrdsSegmentTermsEnum.java:450)
at org.apache.lucene.index.TermContext.build(TermContext.java:94)
at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.ConstantScoreQuery.createWeight(ConstantScoreQuery.java:119)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)
at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:239)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:887)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
at org.neo4j.kernel.api.impl.schema.reader.SimpleIndexReader.countIndexedNodes(SimpleIndexReader.java:136)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.verifyNodeCorrectlyIndexed(PropertyAndNodeIndexedCheck.java:171)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.checkIndexToLabels(PropertyAndNodeIndexedCheck.java:113)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:71)
at org.neo4j.consistency.checking.full.PropertyAndNodeIndexedCheck.check(PropertyAndNodeIndexedCheck.java:48)
at org.neo4j.consistency.report.ConsistencyReporter.dispatch(ConsistencyReporter.java:124)
at org.neo4j.consistency.report.ConsistencyReporter.forNode(ConsistencyReporter.java:440)
at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:63)
at org.neo4j.consistency.checking.full.PropertyAndNode2LabelIndexProcessor.process(PropertyAndNode2LabelIndexProcessor.java:39)
at org.neo4j.consistency.checking.full.RecordCheckWorker.run(RecordCheckWorker.java:77)
at org.neo4j.unsafe.impl.batchimport.cache.idmapping.string.Workers$Worker.run(Workers.java:137)
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-22" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-10" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-40" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-58" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-61" java.lang.OutOfMemoryError: GC overhead limit exceeded




Exception in thread "ParallelRecordScanner-Stage8_PS_Props-18" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-25" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-45" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-28" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-50" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-39" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ParallelRecordScanner-Stage8_PS_Props-51" java.lang.OutOfMemoryError: GC overhead limit exceeded

Michael Hunger

Feb 26, 2017, 9:47:26 PM
to ne...@googlegroups.com, Mattias Persson
How did you call the consistency checker?

How much heap did you provide for it?

Cheers, Michael



--
You received this message because you are subscribed to the Google Groups "Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email to neo4j+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

unreal...@googlemail.com

Feb 27, 2017, 2:27:33 AM
to Neo4j, mat...@neotechnology.com
Michael,

neo4j-admin check-consistency --database=test.db --verbose

dbms.memory.heap.initial_size=120000m
dbms.memory.heap.max_size=120000m

Wayne.

unreal...@googlemail.com

Feb 27, 2017, 2:32:51 AM
to Neo4j, mat...@neotechnology.com

I should have said that the heap sizes above are the ones that I have set in neo4j.conf.

Will these be used by check-consistency, or do I need to supply them elsewhere?

Wayne.

Michael Hunger

Feb 27, 2017, 11:57:49 AM
to ne...@googlegroups.com
Do you really have that much RAM in your machine? A 120G heap usually doesn't make sense; most people run with 32G as a large heap.

That said, I asked, and currently the numbers from the config are not used; you have to do:

export JAVA_OPTS="-Xmx24G -Xms24G"
neo4j-admin ...



unreal...@googlemail.com

Feb 27, 2017, 2:47:47 PM
to Neo4j
I have 1 TB, so I usually consider 120 GB to be small; but then I'm not a Java person.
I will set JAVA_OPTS as suggested and see what happens...

Wayne.

Michael Hunger

Feb 27, 2017, 3:50:17 PM
to ne...@googlegroups.com
Also, if you use Neo4j Enterprise with a contract, you can raise support issues via Zendesk :)


unreal...@googlemail.com

Feb 28, 2017, 12:50:59 PM
to Neo4j
Michael,

After running the check-consistency command for one day with the above parameters, it failed in exactly the same manner.

$ env | grep -i java
JAVA_OPTS=-Xmx32G -Xms32G

Any other ideas ?

Wayne



Michael Hunger

Feb 28, 2017, 8:52:47 PM
to ne...@googlegroups.com
Sorry, I just learned that neo4j-admin uses a different variable:

"You can pass memory options to the JVM via the `JAVA_MEMORY_OPTS` variable as a workaround though."
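For anyone reading along, the workaround looks something like the following. This is a sketch: it assumes `neo4j-admin` is reachable from the current directory (the actual invocation is commented out), and note that the two flags must be quoted into a single value.

```shell
# Quote both JVM flags into one value; unquoted, the shell would treat
# -Xms32G as a second (invalid) identifier to export.
export JAVA_MEMORY_OPTS="-Xmx32G -Xms32G"
echo "$JAVA_MEMORY_OPTS"
# neo4j-admin check-consistency --database=test.db --verbose
```

As the next reply shows, it is worth verifying with `env` that the variable really reached the tool's environment.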



Sent from my iPhone

unreal...@googlemail.com

Mar 2, 2017, 1:07:31 PM
to Neo4j
It appears not:

$ env
JAVA_MEMORY_OPTS=-Xmx32G -Xms32G

.
.
.


....................  90%
2017-03-01 23:24:55.705+0000 INFO  [o.n.c.ConsistencyCheckService] === Stage7_RS_Backward ===
2017-03-01 23:24:55.706+0000 INFO  [o.n.c.ConsistencyCheckService] I/Os
RelationshipStore
  Reads: 3373036269
  Random Reads: 2732592348
  ScatterIndex: 81

2017-03-01 23:24:55.707+0000 INFO  [o.n.c.ConsistencyCheckService] Counts:
  10338061780 skipCheck
  1697668359 missCheck
  5621138678 checked
  10338061780 correctSkipCheck
  1688855306 skipBackup
  3951022794 overwrite
  2191262 noCacheSkip
  239346600 activeCache
  119509522 clearCache
  2429587416 relSourcePrevCheck
  995786837 relSourceNextCheck
  2058354842 relTargetPrevCheck
  137409583 relTargetNextCheck
  6917470274 forwardLinks
  7991190672 backLinks
  1052730774 nullLinks
2017-03-01 23:24:55.708+0000 INFO  [o.n.c.ConsistencyCheckService] Memory[used:404.70 MB, free:1.63 GB, total:2.03 GB, max:26.67 GB]
2017-03-01 23:24:55.708+0000 INFO  [o.n.c.ConsistencyCheckService] Done in  1h 37m 39s 828ms
.........2017-03-01 23:45:36.032+0000 INFO  [o.n.c.ConsistencyCheckService] === RelationshipGroupStore-RelGrp ===
2017-03-01 23:45:36.032+0000 INFO  [o.n.c.ConsistencyCheckService] I/Os
RelationshipGroupStore
  Reads: 410800979
  Random Reads: 102164662
  ScatterIndex: 24
NodeStore
  Reads: 229862945
  Random Reads: 226895703
  ScatterIndex: 98
RelationshipStore
  Reads: 423304043
  Random Reads: 139746630
  ScatterIndex: 33

2017-03-01 23:45:36.032+0000 INFO  [o.n.c.ConsistencyCheckService] Counts:
2017-03-01 23:45:36.033+0000 INFO  [o.n.c.ConsistencyCheckService] Memory[used:661.75 MB, free:1.39 GB, total:2.03 GB, max:26.67 GB]
2017-03-01 23:45:36.034+0000 INFO  [o.n.c.ConsistencyCheckService] Done in  20m 40s 326ms
.Exception in thread "ParallelRecordScanner-Stage8_PS_Props-19" java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.lucene.util.BytesRef.<init>(BytesRef.java:73)
at org.apache.lucene.codecs.blocktreeords.FSTOrdsOutputs.read(FSTOrdsOutputs.java:181)
at org.apache.lucene.codecs.blocktreeords.FSTOrdsOutputs.read(FSTOrdsOutputs.java:32)
at org.apache.lucene.util.fst.Outputs.readFinalOutput(Outputs.java:77)
at org.apache.lucene.util.fst.FST.readNextRealArc(FST.java:1094)
at org.apache.lucene.util.fst.FST.findTargetArc(FST.java:1262)
at org.apache.lucene.util.fst.FST.findTargetArc(FST.java:1186)
at org.apache.lucene.codecs.blocktreeords.OrdsSegmentTermsEnum.seekExact(OrdsSegmentTermsEnum.java:405)
at org.apache.lucene.index.TermContext.build(TermContext.java:94)
at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:192)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.ConstantScoreQuery.createWeight(ConstantScoreQuery.java:119)
at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:904)
at org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:57)


I have also tried larger memory values.

Wayne.

Mattias Persson

Mar 3, 2017, 2:51:48 AM
to Neo4j
Querying Lucene, at least the way the consistency checker uses it, has the drawback that all matching documents are read and kept on the heap before being iterated.

So let me ask you something about your data: are there certain property values that are very common and also indexed?

unreal...@googlemail.com

Mar 3, 2017, 2:49:51 PM
to Neo4j
Yes 

Also, in the 90% scan, whatever Java memory parameter I use, htop shows the same memory footprint. It's as if the heap isn't being set via the env parameters that you are asking me to set.

Wayne

Michael Hunger

Mar 3, 2017, 7:03:26 PM
to ne...@googlegroups.com
Can you try to edit the script directly and add the memory parameters there?


unreal...@googlemail.com

Mar 4, 2017, 4:16:43 AM
to Neo4j
So I have added:
$ cat neo4j-admin
#!/usr/bin/env bash
# Copyright (c) 2016 "Neo Technology,"
# Network Engine for Objects in Lund AB [http://neotechnology.com]
#
# This file is part of Neo4j.
#
# Neo4j is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
set -o errexit -o nounset -o pipefail
[[ "${TRACE:-}" ]] && set -o xtrace

: "${NEO4J_BIN:=$(dirname "$0")}"
readonly NEO4J_BIN
. "${NEO4J_BIN}/neo4j-shared.sh"

main() {
  setup_environment
  check_java
  build_classpath
  export NEO4J_HOME NEO4J_CONF
  exec "${JAVA_CMD}" -Xmx124G -Xms124G -cp "${CLASSPATH}" -Dfile.encoding=UTF-8 "org.neo4j.commandline.admin.AdminTool" "$@"
}

main "$@"

I'll let you know in 24 hours.....

Wayne

unreal...@googlemail.com

Mar 5, 2017, 3:04:36 AM
to Neo4j
Looks like this fixed the problem, as far as the heap size goes.
Setting env variables doesn't appear to work; will you be coding this into the script?

In my case the 90% phase keeps the machine busy, with little I/O activity. The data appears to have been loaded into memory and the consistency check is running. Presumably this will run for many days?

....................  90%
2017-03-04 15:14:41.886+0000 INFO  [o.n.c.ConsistencyCheckService] === Stage7_RS_Backward ===
2017-03-04 15:14:41.887+0000 INFO  [o.n.c.ConsistencyCheckService] I/Os
RelationshipStore
  Reads: 3358829271
  Random Reads: 2730096948
  ScatterIndex: 81

2017-03-04 15:14:41.888+0000 INFO  [o.n.c.ConsistencyCheckService] Counts:
  10338220915 skipCheck
  1697668358 missCheck
  5621138677 checked
  10338220915 correctSkipCheck
  1688855306 skipBackup
  3951022794 overwrite
  2032128 noCacheSkip
  239346600 activeCache
  119509522 clearCache
  2429587415 relSourcePrevCheck
  995786837 relSourceNextCheck
  2058354842 relTargetPrevCheck
  137409583 relTargetNextCheck
  6917470274 forwardLinks
  7991190672 backLinks
  1052730774 nullLinks
2017-03-04 15:14:41.888+0000 INFO  [o.n.c.ConsistencyCheckService] Memory[used:33.59 GB, free:90.41 GB, total:124.00 GB, max:124.00 GB]
2017-03-04 15:14:41.888+0000 INFO  [o.n.c.ConsistencyCheckService] Done in  1h 40m 354ms
.........2017-03-04 15:37:20.050+0000 INFO  [o.n.c.ConsistencyCheckService] === RelationshipGroupStore-RelGrp ===
2017-03-04 15:37:20.051+0000 INFO  [o.n.c.ConsistencyCheckService] I/Os
RelationshipGroupStore
  Reads: 411311642
  Random Reads: 71933550
  ScatterIndex: 17
NodeStore
  Reads: 208717760
  Random Reads: 205603260
  ScatterIndex: 98
RelationshipStore
  Reads: 419830207
  Random Reads: 112104577
  ScatterIndex: 26

2017-03-04 15:37:20.051+0000 INFO  [o.n.c.ConsistencyCheckService] Counts:
2017-03-04 15:37:20.052+0000 INFO  [o.n.c.ConsistencyCheckService] Memory[used:6.03 GB, free:117.97 GB, total:124.00 GB, max:124.00 GB]
2017-03-04 15:37:20.052+0000 INFO  [o.n.c.ConsistencyCheckService] Done in  22m 38s 163ms

unreal...@googlemail.com

Mar 6, 2017, 7:35:51 AM
to Neo4j
As an update: after running for 48 hours, the GC again ran out of heap. I could increase the heap size again (above 124 GB), but it is getting a bit tedious now.
There are no indicators that the DB has a problem; I was just interested in confirming that its integrity is good.

Let me know if you wish me to try anything else....

Wayne

Mattias Persson

Mar 7, 2017, 8:37:36 AM
to Neo4j Development
Could you attach something simple like VisualVM (https://visualvm.java.net/) to the process while it runs the consistency check, and sample memory to get an idea of what it is that takes up so much memory?
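If attaching a GUI tool to a remote server is awkward, the JDK's own command-line tools give a rough equivalent. This is a sketch under assumptions: it presumes the JDK's `jps`, `jmap`, and `jstat` are on the PATH, and that the checker runs as the `AdminTool` main class (as the launch script above suggests).

```shell
# Find the AdminTool JVM's process id.
pid=$(jps | awk '/AdminTool/ {print $1}')

# Histogram of live objects: which classes dominate the heap.
jmap -histo:live "$pid" | head -n 20

# GC utilisation sampled every 5 seconds: watch the old generation
# fill and GC time climb as the checker approaches the OOM.
jstat -gcutil "$pid" 5000
```

The `jmap` histogram taken shortly before the crash is usually the most telling: the top few classes point at whatever the checker is buffering.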




--
Mattias Persson
Neo4j Hacker at Neo Technology