Slice of interfaces taking up so much memory


Gowtham Raj

Aug 26, 2024, 11:31:39 AM
to golang-nuts
Hello team,

The MongoDB Go driver's bulk write takes a slice of interfaces, which is a slice of operations.

To do this, I create an empty slice and append operations to it. When the length of the slice is > 10k, I do a bulk write to the DB.

The problem is that this script eats more memory as it runs; I suspect the issue is with the slice that gets appended to periodically. I try to reset the slice, but the memory never gets freed: Alloc in runtime.MemStats grows to 1 GB within a minute.
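
Roughly, the loop looks like the sketch below - the docs channel and the insert-only write models are placeholders rather than my exact code:

package bulkimport

import (
	"context"

	"go.mongodb.org/mongo-driver/mongo"
)

// insertInBatches sketches the loop: append write models to a slice, flush
// with BulkWrite once the length goes past 10k, then try to reset the slice.
func insertInBatches(ctx context.Context, coll *mongo.Collection, docs <-chan interface{}) error {
	models := make([]mongo.WriteModel, 0, 10000)
	for doc := range docs {
		models = append(models, mongo.NewInsertOneModel().SetDocument(doc))
		if len(models) > 10000 {
			if _, err := coll.BulkWrite(ctx, models); err != nil {
				return err
			}
			models = models[0:0:0] // my attempt to reset the slice (I have tried a few variants here)
		}
	}
	if len(models) > 0 { // flush the remainder
		_, err := coll.BulkWrite(ctx, models)
		return err
	}
	return nil
}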

Any idea how to release the slice's memory after every 10k docs?

The length and cap of the slice go back to zero, but the GC does not clean it up. I'm not sure whether the variable is still being referenced because it's inside the for loop.

How do I solve this problem?

Regards,

Gowtham

robert engels

Aug 26, 2024, 11:45:46 AM
to Gowtham Raj, golang-nuts
It is working as expected. You allocate models to hold 10k elements and you never put more than 10k elements in it, so it will not consume additional memory. (You should probably make the comparison == 10000, not > 10000, to avoid the extra array allocation.)
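
To illustrate the extra allocation (a throwaway example with ints instead of write models):

package main

import "fmt"

func main() {
	s := make([]int, 0, 10000)
	for i := 0; i < 10000; i++ {
		s = append(s, i)
	}
	fmt.Println(len(s), cap(s)) // 10000 10000: still the original backing array

	s = append(s, 0)            // one element past the capacity
	fmt.Println(len(s), cap(s)) // 10001 and a larger cap: a new, bigger array was allocated
}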

GC is non-deterministic - it will only free memory if/when it needs to. I would perform a heap dump to understand exactly what is consuming the memory if it is continually increasing - I suspect it may be in the MongoDB driver, or it is just dead objects waiting to be collected.


robert engels

Aug 26, 2024, 11:49:25 AM
to Gowtham Raj, golang-nuts
Actually, change it to models = models[0:0], not models = models[0:0:0]; otherwise you are causing a new array to be allocated on each loop.

robert engels

Aug 26, 2024, 11:57:58 AM
to Gowtham Raj, golang-nuts
Actually, this is interesting: since the full slice expression requires the new capacity to be no greater than the existing capacity, I surmise it shares the underlying array, but because the new capacity is 0, as soon as you append it is going to allocate a new array - which you don't need or want.
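
A quick standalone check of the difference (ints for brevity):

package main

import "fmt"

func main() {
	buf := make([]int, 0, 10000)

	a := buf[0:0]   // len 0, cap 10000: appends reuse the existing backing array
	b := buf[0:0:0] // len 0, cap 0: the next append must allocate a fresh array

	a = append(a, 1)
	b = append(b, 1)

	fmt.Println(cap(a)) // 10000 - same array as buf
	fmt.Println(cap(b)) // a small, newly allocated capacity
}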

robert engels

Aug 26, 2024, 12:31:30 PM
to Gowtham Raj, golang-nuts
Then you need to do a heap dump to figure it out…

The MongoDB API docs don't specifically state that the call is synchronous (https://www.mongodb.com/docs/manual/core/bulk-write-operations/), but I suspect it is.

You might see if you get different behavior if you turn on ordered writes.
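
Something like (a sketch against the driver's BulkWrite options; the mongoClient and models names are just illustrative):

	_, err := mongoClient.BulkWrite(context.Background(), models,
		options.BulkWrite().SetOrdered(true))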


On Aug 26, 2024, at 11:11 AM, Gowtham Raj <rgowt...@gmail.com> wrote:

I changed it, and models[:0] behaves the same: memory keeps increasing and never gets released to the OS.

robert engels

Aug 26, 2024, 12:37:33 PM
to Gowtham Raj, golang-nuts
You don’t have ordered writes:

_, err := mongoClient.BulkWrite(context.Background(), models, options.BulkWrite().SetOrdered(false))

You need to do a heap dump.

See https://medium.com/@luanrubensf/heap-dump-in-go-using-pprof-ae9837e05419
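
Roughly (a minimal sketch using the standard net/http/pprof handler; the port is arbitrary):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Serve the profiling endpoints on a side port while the import runs.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the bulk-insert loop would run here; select{} just keeps this sketch alive.
	select {}
}

Then pull the profile with: go tool pprof http://localhost:6060/debug/pprof/heap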

On Aug 26, 2024, at 11:33 AM, Gowtham Raj <rgowt...@gmail.com> wrote:

I do have ordered writes. I don't want to create an array every time. I tried using models[:0], but the memory still does not come down.