Hi all,
I am preparing the upgrade from our DSpace 1.8 to DSpace 5.3 and am running into many unpredictable memory issues. It is not clear when or why these happen, but when one does, the only fix is to shut down Tomcat, wait a little and start Tomcat again. Increasing the heap size allocated to Tomcat and Java only seems to delay when the out-of-memory problems occur. One type of problem reports a "Map failed" error, which points to a blog post recommending not to allocate more heap than is really needed.
Any advice on allocating enough memory to Tomcat that the processes run properly, without running into these MMap errors?
Our test server is a new 64-bit Linux server running Oracle Java 8, Tomcat 8.0.24, DSpace 5.3 and Oracle 11g. Our repository is small, about 5000 items.
The problems occur both when I do a fresh install and when I upgrade the database with a clean DSpace 5.3 code base. We have not used Solr before, so all Solr indexes are created from scratch.
Importing OAI, importing converted logs for statistics, and creating Discovery indexes fail at some point most of the time: part of the index is written, but then it stops. This is on a test server that nobody else uses; when I run a script, no other scripts are running, and every script is run as the dspace user.
When I performed an upgrade of our 1.8 database, the browse and search indexes were not created, so I manually ran [dspace]/bin/dspace index-discovery as the dspace user. This starts fine and indexes are written, but it then stops with "Map failed" errors:
2015-09-08 17:09:31,867 ERROR org.dspace.discovery.SolrServiceImpl @ Error while writing item to discovery index: 1820/1376 message:Map failed: MMapIndexInput(path="/dspace/dspace183/solr/search/data/index/_n6.fdt") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 35127671 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Map failed: MMapIndexInput(path="/dspace/dspace183/solr/search/data/index/_n6.fdt") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 35127671 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:552)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.dspace.discovery.SolrServiceImpl.writeDocument(SolrServiceImpl.java:738)
at org.dspace.discovery.SolrServiceImpl.buildDocument(SolrServiceImpl.java:1419)
at org.dspace.discovery.SolrServiceImpl.indexContent(SolrServiceImpl.java:225)
at org.dspace.discovery.SolrServiceImpl.updateIndex(SolrServiceImpl.java:405)
at org.dspace.discovery.IndexClient.main(IndexClient.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:226)
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:78)
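The error message itself asks us to review the kernel's virtual memory limits. What I have been checking (as the dspace user) is a sketch along these lines; the remediation value in the comment is illustrative, not something I have confirmed:

```shell
# Check the limits the Lucene error message points at.
# On a 64-bit Linux box both ulimit values should ideally be "unlimited".
VLIMIT=$(ulimit -v)                          # max virtual memory, in KB
MLIMIT=$(ulimit -m)                          # max resident set size, in KB
MAPCOUNT=$(cat /proc/sys/vm/max_map_count)   # kernel default is 65530

echo "ulimit -v:        $VLIMIT"
echo "ulimit -m:        $MLIMIT"
echo "vm.max_map_count: $MAPCOUNT"

# If max_map_count turns out to be the bottleneck, it can be raised
# as root, e.g.:
#   sysctl -w vm.max_map_count=262144
# and persisted in /etc/sysctl.conf.
```

Note that these limits must be checked in the environment Tomcat actually starts in, not just an interactive shell, since init scripts can impose their own ulimits.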
When importing OAI items, it also fails with an MMap error:
2015-09-02 16:03:39,299 INFO org.apache.solr.update.processor.LogUpdateProcessor @ [oai] webapp=/solr path=/update params={wt=javabin&version=2} {add=[1820/3352 (1511210477730922496)]} 0 3
2015-09-02 16:03:39,398 ERROR org.apache.solr.update.CommitTracker @ auto commit error...:java.io.IOException: Map failed: MMapIndexInput(path="/dspace/dspace183/solr/oai/data/index/_1z_Lucene41_0.tim") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 164256 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907)
at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:224)
at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:199)
at org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:198)
at org.apache.lucene.codecs.blocktree.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:106)
at org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:441)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:197)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:254)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:108)
at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:144)
at org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:282)
at org.apache.lucene.index.IndexWriter.applyAllDeletesAndUpdates(IndexWriter.java:3271)
at org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:3262)
at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2952)
at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3097)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3064)
at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:582)
at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Importing statistics from converted logs also fails to complete most of the time. Part of the statistics is written to the Solr index and then it suddenly stops, for example with this error as reported on the console. The log file sometimes also reports an out of memory error.
Processed 4733 log lines
- 135 entries added to solr: 2.852%
- 4598 errors: 97.148%
- 0 search engine activity skipped: 0%
About to commit data to solr...Error committing statistics to solr server!
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:566)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:168)
at org.apache.solr.client.solrj.SolrServer.commit(SolrServer.java:146)
at org.dspace.statistics.util.StatisticsImporter.load(StatisticsImporter.java:371)
at org.dspace.statistics.util.StatisticsImporter.main(StatisticsImporter.java:486)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:226)
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:78)
Caused by: org.apache.http.NoHttpResponseException: localhost:8080 failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
... 13 more
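For reference, the heap settings I have been experimenting with follow the advice in the blog post linked from the errors (keep the JVM heap no larger than needed, leaving the rest of RAM to the OS for mmap and page cache). The sketch below shows the shape of our Tomcat bin/setenv.sh; the -Xms/-Xmx values are illustrative, not a recommendation:

```shell
# Sketch of Tomcat's bin/setenv.sh: keep the JVM heap modest so the OS
# retains virtual address space and page cache for Lucene's MMapDirectory.
# The heap values below are illustrative only.
CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m"
export CATALINA_OPTS
echo "CATALINA_OPTS=$CATALINA_OPTS"
```

Is this the right trade-off, or should the heap be sized differently when the Discovery, OAI and statistics cores all live in one Tomcat instance?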
best wishes,
Francis Brouns