Re: heavy heap usage and garbage collection issues with the mongo java driver


Bryan

Jul 6, 2012, 7:35:46 PM
to mongod...@googlegroups.com

How many reads per second are you able to achieve? You may be hitting the open file descriptor limit. Are any exceptions or errors being logged? You can also attach JConsole to the YCSB client to watch the size and growth of the eden, survivor, and tenured spaces and how often the garbage collector runs on each.
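
If you would rather capture those numbers from inside the client instead of watching JConsole, here is a minimal sketch using the standard java.lang.management MXBeans (this is generic JVM monitoring code, not part of YCSB or the driver; the class name HeapReport is just for illustration):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapReport {
        public static void main(String[] args) {
            // One entry per pool: eden, survivor, old/tenured gen, perm gen, code cache
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage u = pool.getUsage();
                System.out.printf("%-25s used=%,d committed=%,d max=%,d%n",
                        pool.getName(), u.getUsed(), u.getCommitted(), u.getMax());
            }
            // One entry per collector (e.g. PS Scavenge, PS MarkSweep)
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%-25s collections=%d time=%d ms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }

Calling something like this periodically from a background thread in the load generator gives the same eden/survivor/tenured growth picture as JConsole.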


On Friday, July 6, 2012 8:12:57 AM UTC-7, ker can wrote:
Hi,

I'm running some throughput tests with MongoDB, using Yahoo's Cloud Serving Benchmark (YCSB) to drive a 100% read-only workload.
A single mongod is running on a system with 32 cores/190 GB of memory; the working set (about 10 million documents) fits entirely in memory.
Each document is about 1 KB.

The load generator (the YCSB client) is running on a separate server, also with 32 cores/190 GB of memory.
I'm running the client with 20 threads for 5 minutes and I see heavy heap usage and garbage collection going on.

(using java 1.7.0_03)

So, for example, if I set the heap to 150 GB with these options:
-Xms150G -Xmx150G -XX:+PrintGCTimeStamps -verbosegc

starttime, 07:14:38
34.443: [GC 39321600K->12117138K(150732800K), 19.7706060 secs]
101.010: [GC 51438738K->24023043K(150732800K), 26.4009280 secs]
190.959: [GC 63344643K->42406579K(150732800K), 34.5489460 secs]
293.656: [GC 81728179K->61900347K(150732800K), 41.8107320 secs]
endtime, 07:19:38

So 2 minutes of a 5-minute run are spent garbage collecting, which hurts throughput.
[ I ran the same YCSB client against a MySQL database through a JDBC driver with the same default JVM options: heap usage stayed very low and there was very little GC activity. ]


If I use the G1 collector, the results are not very different (-Xms150G -Xmx150G -XX:+PrintGCTimeStamps -verbosegc -XX:+UseG1GC):

starttime, 07:24:25
106.665: [GC pause (young) 51200M->35976M(153600M), 76.4086490 secs]
278.610: [GC pause (young) 80776M->61051M(153600M), 63.5580800 secs]
endtime, 07:29:25


The read implementation is quite simple:

    public int read(String key)
    {
        try {
            // Look the document up by its primary key: { _id: <key> }
            DBObject q = new BasicDBObject().append("_id", key);

            DBObject queryResult = collection.findOne(q);

            // 0 = found, 1 = not found
            int retVal = queryResult != null ? 0 : 1;

            // Drop the local references before returning (they would go out of
            // scope anyway, so this shouldn't change what the GC can reclaim)
            q = null;
            queryResult = null;

            return retVal;

        } catch (Exception e) {
            System.err.println(e.toString());
            return 1;
        }
    }


Why is there so much heap consumption? Are there any tuning guidelines for the MongoDB Java driver?
I see one report of a performance regression caused by increased memory use that is supposed to be fixed in 2.8, which is the version I'm using:
https://jira.mongodb.org/browse/JAVA-505

I also saw some discussion of Java driver performance here, but it's not clear what the takeaway was:
http://groups.google.com/group/mongodb-user/browse_thread/thread/48a604703d9ffc61


32 cores/190 GB of physical memory.
java version "1.7.0_03"
Java(TM) SE Runtime Environment (build 1.7.0_03-b04)
Java HotSpot(TM) 64-Bit Server VM (build 22.1-b02, mixed mode)

mongod version: mongodb-linux-x86_64-2.0.6
mongo java driver version: Version 2.8.0


Here are the collection stats and a sample document:

> db.usertable.stats()
{
        "ns" : "ycsb.usertable",
        "count" : 10000000,
        "size" : 11559996120,
        "avgObjSize" : 1155.999612,
        "storageSize" : 12892401648,
        "numExtents" : 38,
        "nindexes" : 1,
        "lastExtentSize" : 2146426864,
        "paddingFactor" : 1,
        "flags" : 1,
        "totalIndexSize" : 561143408,
        "indexSizes" : {
                "_id_" : 561143408
        },
        "ok" : 1
}
>


> db.usertable.find()
{ "_id" : "user0", "field5" : BinData(0,"NDsmISo0Nj8oOT08JTw8KyIwNzotMyoqPCkyIzMiKys0Lz0uOiAoLzMlNS09MSosKCkiIDQ6KywqJS8vJC8oKjw0OT87PCAvKygiJzI4KCw6PjYpIj08NSYqICUyNSsxOi0iMw=="), "field4" : BinData(0,"NDonLjM0Ny0kISYhPy4mMzs/JjEgNyctPygkLz00JSspKSQgJTYkIS8sLD03OSQ7PSgmMz8uOjkrKCo4PDQwITwgKz09JyM2LCw+NT86KSY5Kjo6NTQtLT47NzI/Lzg8NSc7JQ=="), "field3" : BinData(0,"PSwtLCkyIy0zODglLDYlLj48MTU9MTkzKjsqMzotJjU5MjQmJiM0KTItLj0tMzMjJiwmPTI3LCYzPS8jLyopJTEzMzsyOz02NjwzOjMkNDEhMjQuODIzMiMvISs8JSIkLDwwIg=="), "field2" : BinData(0,"MyY6PS8xLCAgJis8OzEhMDAqMDY5PTk9OywmNDA4Iig5Jyw+Jy8kISMwPTUmKiAhOS42KzI0Mys6PiwzNSs7OTo1LiI+OT85PjYxJT89KyArPDEwICsxIDEwODg3Izw8IT87KA=="), "field9" : BinData(0,"IyAqIy0uNCA/OyY/Ozs/LTk7IzE5Lz8vLyM7Pzk2MCUsODYyNCA1PSYpNDoqISM7NCY9JSwsLDA7IjMgNTMvODA4NTEoNzQ9PTc9OCI1PS8xPzw3ODguODw5Pis5Oyw0LjAuMA=="), "field8" : BinData(0,"NyY3PzImMCQ4JCwuISgqPyAqOTYyKCo2MykqJSo+IDAnIzwqLigiIyAjMCsoLzczMT86MTY5MSY7Ozg+OSA7OS40Oj8iKic7ICQpNz42Pyw6NSA4OjQ2LCo9KjY8Ii48PCohIg=="), "field7" : BinData(0,"MyMiLSApNCk8JTszLS0yIywnOCYrOzwtNyE2PDc7OSg0ICUsIzQwLDYtPTApNDwmJisiIiMzLDUsITYmMCglOz0qMjM9IS8gIiArKjMqJzM/JTMrMyA8KCsmPTUuMysuKyY2Mg=="), "field6" : BinData(0,"Pj8jMjM+ODw5JCE2KSomNCQ3LS0uIjIhJD4hOSMlKiQ9LSgyKD4iOzwnOSspLDstLjIzKSw3Kyw2PC4lPCwxIj82ODYwMi0wNTIrLjYgKCggPz8jMTIuIzQnOzM5IC8uICcqKA=="), "field1" : BinData(0,"JyQqIjkuLiMiNy8gNyQoLCoxIj05JigqIyo7Kzs0LSMzMzsmMCwrMTY5IC08IyUhOS8jKDs8JiAoMCg4NC8vNSE6NjAhOS8rJDA9LT0sMy0yPScxKi0yKC8oOywwNCMtJy05Kw=="), "field0" : BinData(0,"MCM4Jyc/KjM4ISwjIyw8MzUhKT0yOC49Iz45KDsuLysuOCg8JCwuLTwmJyEpKyspOz8pKSYvKyMzKSomOzw7PTchJSU8LzI0PDUxLiUsOy07Lz4+MDE0JDwoJSsnKjw5Oj04PQ==") }

Thanks Much !

ker can

Jul 11, 2012, 10:53:06 AM
to mongod...@googlegroups.com
About 100K reads per second before GC kicks in. I don't think the open file descriptor limit is being hit. I've attached a JConsole screenshot of heap usage increasing; it reports about 2 minutes spent in GC, and at the end of a 5-minute run heap usage is close to 100 GB. There are no exceptions or errors logged.
jconsole.jpg

Bryan

Jul 12, 2012, 12:03:29 AM
to mongod...@googlegroups.com

100K reads per second is pretty fast. It would be interesting to see how this rate compares with the MySQL database. The amount of work the garbage collector has to do is (roughly) proportional to the query rate: higher loads mean the GC has more objects to clean up per unit of time. Also, the time to complete a garbage collection cycle grows with the size of the heap, so although you want the heap to be as large as needed, make sure -Xmx is not too large. If you stop the load test while monitoring in JConsole, does the garbage collector catch up and recover heap?

ker can

Jul 14, 2012, 6:58:43 AM
to mongod...@googlegroups.com

It makes sense that the GC has more work to do at higher throughput rates, so comparing with MySQL may not be fair here because the throughputs were quite different. However, the Couchbase client achieves throughput in the same ballpark, yet spends very little time in GC and consumes heap nowhere near this rate.

Here are jmap histogram snapshots of the top 30 classes; snapshot 2 was taken after two or three GCs had run since snapshot 1. (This was with just one thread; otherwise jmap takes forever to walk the heap.) From the read code I have, I would have expected everything to be garbage collected without anything being promoted to the tenured space, unless the mongo driver itself is holding references.

Snapshot 1:

 num     #instances         #bytes  class name
----------------------------------------------
1:              2749188 406755088       char[]
2:              2749393 87980576        java.lang.String
3:              1498264 59930560        java.util.LinkedHashMap$Entry
4:              124960  9997376 java.util.HashMap$Entry[]
5:              124853  8989416 com.mongodb.DBApiLayer$Result
6:              125432  7013376 java.lang.Object[]
7:              124854  6991824 com.mongodb.BasicDBObject
8:              124854  6991824 com.mongodb.Response
9:              214223  6855136 java.util.LinkedList
10:             125070  5002800 java.lang.ref.Finalizer
11:             24466   4194584 int[]
12:             169538  4068912 java.util.LinkedList$Node
13:             124853  3995296 java.util.LinkedList$ListItr
14:             46035   3080824 byte[]
15:             124902  2997648 java.util.ArrayList
16:             44685   1787400 com.mongodb.OutMessage
17:             44684   1787360 com.mongodb.DefaultDBCallback
18:             11458   1565568 * MethodKlass
19:             11458   1440664 * ConstMethodKlass
20:             44684   1429888 org.bson.BasicBSONDecoder$BSONInput
21:             44684   1429888 java.util.LinkedHashMap$EntryIterator
22:             44689   1072536 java.util.concurrent.ConcurrentLinkedQueue$Node
23:             44684   1072416 com.mongodb.Response$MyInputStream
24:             900     968048  * ConstantPoolKlass
25:             44685   714960  com.mongodb.DefaultDBEncoder
26:             900     647696  * InstanceKlassKlass
27:             828     621920  * ConstantPoolCacheKlass
28:             325     126800  * MethodDataKlass
29:             1015    122240  java.lang.Class
30:             1543    86312   * System ObjArray

Snapshot 2, a few seconds later:

 num     #instances         #bytes  class name
----------------------------------------------
1:              5989282 886289256       char[]
2:              5989487 191663584       java.lang.String
3:              3265588 130623520       java.util.LinkedHashMap$Entry
4:              272237  21779536        java.util.HashMap$Entry[]
5:              272127  19593144        com.mongodb.DBApiLayer$Result
6:              272709  15260888        java.lang.Object[]
7:              272131  15239336        com.mongodb.BasicDBObject
8:              272128  15239168        com.mongodb.Response
9:              295094  9443008 java.util.LinkedList
10:             272130  8708160 java.util.LinkedList$ListItr
11:             283612  6806688 java.util.LinkedList$Node
12:             272179  6532296 java.util.ArrayList
13:             42816   6075432 int[]
14:             81960   3278400 java.lang.ref.Finalizer
15:             11458   1565568 * MethodKlass
16:             11458   1440664 * ConstMethodKlass
17:             12832   1221456 byte[]
18:             900     968048  * ConstantPoolKlass
19:             900     647696  * InstanceKlassKlass
20:             828     621920  * ConstantPoolCacheKlass
21:             11482   459280  com.mongodb.OutMessage
22:             11481   459240  com.mongodb.DefaultDBCallback
23:             11481   367392  org.bson.BasicBSONDecoder$BSONInput
24:             11481   367392  java.util.LinkedHashMap$EntryIterator
25:             11485   275640  java.util.concurrent.ConcurrentLinkedQueue$Node
26:             11481   275544  com.mongodb.Response$MyInputStream
27:             11482   183712  com.mongodb.DefaultDBEncoder
28:             325     126800  * MethodDataKlass
29:             1015    122240  java.lang.Class
30:             1543    86312   * System ObjArray

Jeff Yemin

Jul 14, 2012, 12:24:46 PM
to mongod...@googlegroups.com
Are you using -histo or -histo:live when you invoke jmap?  What are your GC settings for this run?  

ker can

Jul 15, 2012, 12:20:09 PM
to mongod...@googlegroups.com
The one above was just with the -histo option. Here's one with -histo:live; I just went with the default GC settings for 1.7.0_03.

 num     #instances         #bytes  class name
----------------------------------------------
   1:      11524637     1705520624  [C
   2:      11524842      368794944  java.lang.String
   3:       6284872      251394880  java.util.LinkedHashMap$Entry
   4:        523844       41908096  [Ljava.util.HashMap$Entry;
   5:        523737       37709064  com.mongodb.DBApiLayer$Result
   6:        524316       29350880  [Ljava.lang.Object;
   7:        523738       29329328  com.mongodb.BasicDBObject
   8:        523738       29329328  com.mongodb.Response
   9:        523748       20949920  java.lang.ref.Finalizer
  10:        523739       16759648  java.util.LinkedList
  11:        523737       16759584  java.util.LinkedList$ListItr
  12:        523786       12570864  java.util.ArrayList
  13:        523738       12569712  java.util.LinkedList$Node
  14:         11450        1564480  <methodKlass>
  15:         11450        1349032  <constMethodKlass>
  16:           899         949352  <constantPoolKlass>
  17:           899         654256  <instanceKlassKlass>
  18:           827         621568  <constantPoolCacheKlass>
  19:          1051         200856  [B
  20:           326         126952  <methodDataKlass>
  21:          1014         122112  java.lang.Class
  22:          1500          84600  [[I
  23:          1360          80880  [S
  24:            98          57232  <objArrayKlassKlass>
  25:           810          44432  [I
  26:           389          31120  java.lang.reflect.Method
  27:           192          13824  java.lang.reflect.Field
  28:           417          13344  java.util.concurrent.ConcurrentHashMap$HashEntry
  29:           276           8832  java.util.HashMap$Entry
  30:           290           8784  [Ljava.lang.String;

ker can

Jul 15, 2012, 5:00:19 PM
to mongod...@googlegroups.com

Another one with -histo:live:


 num     #instances         #bytes  class name
----------------------------------------------
   1:      11203334     1345524224  [B
   2:      13442429      537697160  java.util.LinkedHashMap$Entry
   3:      13445400      432021000  [C
   4:      13445502      430256064  java.lang.String
   5:       1120495       89640240  [Ljava.util.HashMap$Entry;
   6:       1120196       80654112  com.mongodb.DBApiLayer$Result
   7:       1120921       62758736  [Ljava.lang.Object;
   8:       1120213       62731928  com.mongodb.BasicDBObject
   9:       1120211       62731816  com.mongodb.Response
  10:       1120271       44810840  java.lang.ref.Finalizer
  11:       1120213       35846816  java.util.LinkedList
  12:       1120196       35846272  java.util.LinkedList$ListItr
  13:       1120385       26889240  java.util.ArrayList
  14:       1120212       26885088  java.util.LinkedList$Node
  15:         12025        1643000  <methodKlass>
  16:         12025        1438000  <constMethodKlass>
  17:           963        1023456  <constantPoolKlass>
  18:           963         698096  <instanceKlassKlass>
  19:           879         661120  <constantPoolCacheKlass>
  20:           407         158672  <methodDataKlass>
  21:          1078         130096  java.lang.Class
  22:          1447          89280  [S
  23:          1576          88368  [[I
  24:            98          57232  <objArrayKlassKlass>
  25:           847          50432  [I
  26:           391          31280  java.lang.reflect.Method
  27:           570          18240  java.util.concurrent.ConcurrentHashMap$HashEntry
  28:           225          16200  java.lang.reflect.Field
  29:           262          12576  java.util.HashMap
  30:           722          11552  java.lang.Object

Jeff Yemin

Jul 18, 2012, 9:26:08 PM
to mongod...@googlegroups.com
I'm not able to reproduce this. Running on OS X, I'm seeing results like this:

Jmpb:tmp jeff$ jmap -histo:live 21373 | head -30

 num     #instances         #bytes  class name
----------------------------------------------
   1:         12406        1694848  <methodKlass>
   2:         12406        1591648  <constMethodKlass>
   3:          1016        1116328  <constantPoolKlass>
   4:          1016         726360  <instanceKlassKlass>
   5:           928         691136  <constantPoolCacheKlass>
   6:          3201         297600  [C
   7:          1251         226824  [B
   8:           482         183784  <methodDataKlass>
   9:          1138         137328  java.lang.Class
  10:          1900         127744  [I
  11:          3414         109248  java.lang.String
  12:          1548          92960  [S
  13:          1655          91608  [[I
  14:           105          60480  <objArrayKlassKlass>
  15:           847          37168  [Ljava.lang.Object;
  16:           391          31280  java.lang.reflect.Method
  17:           247          17784  com.mongodb.DBApiLayer$Result
  18:           541          17312  java.util.concurrent.ConcurrentHashMap$HashEntry
  19:           461          14752  java.util.HashMap$Entry
  20:           247          13832  com.mongodb.Response
  21:           192          13824  java.lang.reflect.Field
  22:           146          12832  [Ljava.util.HashMap$Entry;
  23:           354          11328  java.util.LinkedList
  24:           275          11000  java.lang.ref.Finalizer
  25:           329          10528  [Ljava.lang.String;
  26:           543           8688  java.lang.Object
  27:           247           7904  java.util.LinkedList$ListItr

I'm running the command:

./bin/ycsb run  mongodb -s -P workloads/workloada

but I bumped recordcount and operationcount to 1000000 so that the test runs for longer.

One thing I do see is that the GC is slower to collect instances of com.mongodb.DBApiLayer$Result, because that class overrides the finalize method.
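
To make the effect of that finalizer concrete, here is a minimal self-contained sketch (not driver code; the FinalizerDemo classes are invented for illustration). An object whose class overrides finalize() has to be discovered by the GC, queued for the finalizer thread, finalized, and only then reclaimed on a later cycle, so such instances tend to linger in heap histograms:

    public class FinalizerDemo {
        static class Plain {
            byte[] payload = new byte[1024];
        }
        static class Finalizable {
            byte[] payload = new byte[1024];
            @Override
            protected void finalize() {
                // even an empty finalizer forces the two-phase collection path
            }
        }

        public static void main(String[] args) throws InterruptedException {
            for (int i = 0; i < 1_000_000; i++) {
                new Plain();       // unreachable immediately, collectible on the next young GC
                new Finalizable(); // must pass through the finalizer queue first
            }
            System.gc();           // request a collection (advisory only)
            Thread.sleep(2000);    // give the finalizer thread time to drain its queue
            // A `jmap -histo:live <pid>` taken around here will typically show Finalizable
            // (and java.lang.ref.Finalizer) instances lingering longer than Plain ones.
        }
    }

That also matches the histograms above, where the com.mongodb.DBApiLayer$Result counts track the java.lang.ref.Finalizer counts almost exactly.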

Can you provide a copy of the workload file that you're using, so that I can run against the same one?

Regards,
Jeff

ker can

Jul 20, 2012, 12:06:55 PM
to mongod...@googlegroups.com
I'm using workloadc, the 100% read-only test. It exhibits the problem much more clearly, since the throughput of the 100% read-only workload is much higher than that of the 50/50 update/read workloada.

Thanks

ker can

Jul 20, 2012, 12:11:16 PM
to mongod...@googlegroups.com
Here are the exact parameters. I set the operation count very high and control the execution time with the maxexecutiontime parameter.

export CLASSPATH=$CLASSPATH:$YCSBROOT/core/lib/core-0.1.4.jar:$YCSBROOT/mongodb-binding/lib/mongodb-binding-0.1.4.jar

$JAVA_HOME/bin/java \
com.yahoo.ycsb.Client -db com.yahoo.ycsb.db.MongoDbClient \
-P $YCSBROOT/workloads/workloadc \
-p mongodb.url=mongodb://10.23.0.250 \
-p mongodb.writeConcern=safe \
-p mongodb.database=ycsb \
-p recordcount=2000000 \
-p operationcount=900000000 \
-p insertorder=ordered \
-p requestdistribution=uniform \
-p maxexecutiontime=720 \
-threads 16

ker can

Jul 20, 2012, 12:12:58 PM
to mongod...@googlegroups.com
... and the recordcount is the actual count of documents in the collection. (Sorry about the multiple replies.)

Jeff Yemin

Jul 20, 2012, 2:16:03 PM
to mongod...@googlegroups.com
I think you're seeing much larger instance counts than I am because your server has so much more physical memory than mine, and Java 7 picks the default maximum heap size based on the amount of physical memory available. I did a run with these explicit VM settings:

-Xms128M -Xmx128M

and after running for 8 minutes, total GC time was only around 13 seconds:

Time:       2012-07-20 14:08:56
Used:       20,221 kbytes
Committed:  123,904 kbytes
Max:        123,904 kbytes
GC time:    0.479 seconds on PS MarkSweep (5 collections)
            12.073 seconds on PS Scavenge (314 collections)

and heap usage never went above 70 MB. A sample histogram after 8 minutes looks like this:

Jmpb:YCSB jeff$ jmap -histo:live 24614 | head -40

 num     #instances         #bytes  class name
----------------------------------------------
   1:         16870        2303120  <methodKlass>
   2:         16870        2217032  <constMethodKlass>
   3:          1552        1686088  <constantPoolKlass>
   4:          7043        1578952  [C
   5:         28049        1121960  java.util.TreeMap$Entry
   6:          1552        1101728  <instanceKlassKlass>
   7:          1433        1051328  <constantPoolCacheKlass>
   8:          1995        1015400  [B
   9:         12242         881424  com.mongodb.DBApiLayer$Result
  10:         17100         825608  [Ljava.lang.Object;
  11:         12258         686448  com.mongodb.Response
  12:         12341         493640  java.lang.ref.Finalizer
  13:         17601         422424  java.lang.Long
  14:          8678         416544  java.util.TreeMap
  15:         12596         403072  java.util.LinkedList
  16:         12242         391744  java.util.LinkedList$ListItr
  17:         12382         297168  java.util.ArrayList
  18:           570         239672  <methodDataKlass>
  19:          7280         232960  java.lang.String
  20:          8646         207504  javax.management.openmbean.CompositeDataSupport
  21:          1713         206448  java.lang.Class
  22:          3192         205416  [I
  23:          4981         199240  java.util.LinkedHashMap$Entry
  24:          8655         138480  java.util.TreeMap$KeySet
  25:          2324         138040  [S
  26:          2471         127120  [[I
  27:          1359         111080  [Ljava.util.HashMap$Entry;
  28:          3955          94920  java.util.Collections$UnmodifiableRandomAccessList
  29:          3935          94440  java.util.Arrays$ArrayList
  30:           144          82944  <objArrayKlassKlass>
  31:           796          44576  java.util.LinkedHashMap
  32:           476          38080  java.lang.reflect.Method
  33:          1006          32192  java.util.HashMap$Entry
  34:          1127          29672  [Ljava.lang.String;
  35:           529          25392  java.util.HashMap
  36:           712          22784  java.util.concurrent.ConcurrentHashMap$HashEntry
  37:           880          21120  javax.management.ObjectName$Property

I don't see anything unexpected in these results. It may well be that the MongoDB Java driver generates more garbage than the drivers for other databases you're testing, but in practice we have not seen this cause big problems for users.

Jeff Yemin

Jul 20, 2012, 2:43:03 PM
to mongod...@googlegroups.com
I also noted that CPU usage averaged 18.4% and never rose above 22% on a 4-core box, which shows the workload is not CPU-bound even with 16 threads.

ker can

Jul 20, 2012, 5:28:07 PM
to mongod...@googlegroups.com

Hm, what throughput is being reported? I see roughly 49K operations (reads) per second.
With the same options (-Xms128M -Xmx128M) and 16 threads, JConsole says about 2 minutes of an 8-minute run were spent on garbage collection (see the attached screenshot).

thanks !
jconsole-screen1.jpg

Jeff Yemin

Jul 20, 2012, 6:04:07 PM
to mongod...@googlegroups.com
[OVERALL], RunTime(ms), 734962.0
[OVERALL], Throughput(ops/sec), 11741.978496847456
[READ], Operations, 8629908
[READ], AverageLatency(us), 1329.129166035142
[READ], MinLatency(us), 108
[READ], MaxLatency(us), 332921
[READ], 95thPercentileLatency(ms), 2
[READ], 99thPercentileLatency(ms), 4

The difference seems to be the number of parallel scavenges being done: in my run there were 314, while in yours there were 5042. It doesn't seem to be having an effect on your overall throughput though. See http://developers.sun.com/mobility/midp/articles/garbagecollection2/#2.2.1 for more information on the parallel scavenge collector. Based on what I read there, your run spends more time scavenging because you have more spare hardware capacity to devote to it. I don't think this is a problem. What latency measurements are being reported?
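
One variable you could hold constant between the two machines is the number of GC worker threads, which HotSpot sizes from the core count by default. For example (these are standard HotSpot flags, not taken from either of the runs above):

    -Xms128M -Xmx128M -XX:+UseParallelGC -XX:ParallelGCThreads=4 -verbose:gc -XX:+PrintGCTimeStamps

With both runs pinned to the same ParallelGCThreads value, the scavenge counts and times become more directly comparable across the 4-core and 32-core boxes.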

ker can

Jul 20, 2012, 7:13:12 PM
to mongod...@googlegroups.com
I just noticed that the time JConsole reports and the times reported by -verbosegc don't add up. When I add up all the seconds reported by -verbosegc -XX:+PrintGCTimeStamps, it comes to 122 seconds (file attached). That's about 25% of the run, which is also roughly how much the throughput is being impacted. So, for example, if I increase the heap size and run for a short while, the throughput is around 65K.

You mentioned the GC is slower to collect instances of com.mongodb.DBApiLayer$Result; I'm wondering whether other issues like that are contributing here. The -histo:live from my run looks a bit different from your run with the same settings.


 num     #instances         #bytes  class name
----------------------------------------------
   1:         41015        6047704  [B
   2:           854        3409184  [I
   3:         50449        3387216  [C
   4:         47668        1906720  java.util.LinkedHashMap$Entry
   5:         12022        1642592  <methodKlass>
   6:         50551        1617632  java.lang.String
   7:         12022        1437712  <constMethodKlass>
   8:           962        1023080  <constantPoolKlass>
   9:           962         697344  <instanceKlassKlass>
  10:           878         660928  <constantPoolCacheKlass>
  11:          4261         341520  [Ljava.util.HashMap$Entry;
  12:          3966         285552  com.mongodb.DBApiLayer$Result
  13:          4690         249800  [Ljava.lang.Object;
  14:          3983         223048  com.mongodb.BasicDBObject
  15:          3980         222880  com.mongodb.Response
  16:          4040         161600  java.lang.ref.Finalizer
  17:           388         154008  <methodDataKlass>
  18:          1077         129976  java.lang.Class
  19:          3983         127456  java.util.LinkedList
  20:          3965         126880  java.util.LinkedList$ListItr

thanks

client1.result.txt

Jeff Yemin

Jul 25, 2012, 6:50:42 PM
to mongod...@googlegroups.com
The finalizer is only useful if you don't always call DBCursor.close(); if you do, it's a no-op.

The ability to control this better is being tracked under https://jira.mongodb.org/browse/JAVA-610.
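
For reference, here is a minimal sketch of the pattern being described, assuming the 2.x DBCursor API (the collection and query variables are placeholders): close the cursor explicitly, ideally in a finally block, so that by the time the finalizer runs there is nothing left for it to do:

    DBCursor cursor = collection.find(query);
    try {
        while (cursor.hasNext()) {
            DBObject doc = cursor.next();
            // ... use doc ...
        }
    } finally {
        cursor.close(); // releases the server-side cursor now; finalize() becomes a no-op
    }

This only applies where the application itself holds a DBCursor; the read() implementation earlier in the thread uses findOne(), which doesn't expose a cursor to close.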


Regards,
Jeff

On Wednesday, July 25, 2012 6:20:53 PM UTC-4, Alexey Rudkovskiy wrote:
Jeff, how critical is the finalize() in com.mongodb.DBApiLayer$Result? It seems to be a performance-degrading method, and as far as I'm aware finalize isn't considered reliable anyway. Can we safely remove it?

A.