MongoDB in GCE - Limiting memory on insert


Boas Enkler

Mar 5, 2018, 6:09:22 PM3/5/18
to mongodb-user

We have MongoDB running in Google's Kubernetes. An API currently offers a method that receives data batches which should be inserted into MongoDB.
MongoDB runs in Kubernetes on 3 nodes, each having 3.75 GB of memory.

Now it happens very often that, when running such a batch import, we run into out-of-memory issues.

The cluster tried to allocate about 8 GB of RAM.

The question is: what is the correct way to limit the memory usage? The batch import isn't that performance-critical.
We set wiredTigerCacheSizeGB to 1.5 (GB), but this didn't help.


Our code for inserting the data looks like this:


            IMongoCollection<T> stageCollection = Database.GetCollection<T>(StageName);

            foreach (var batch in entites.Batch(1000))
            {
                await stageCollection.InsertManyAsync(batch);
            }
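
One thing that might be worth trying on the client side (a sketch only, not a tested fix): smaller batches and unordered inserts via the driver's `InsertManyOptions`, so the server doesn't have to process each large batch strictly in order. The batch size of 100 here is illustrative, not a tuned value:

```csharp
// Sketch: smaller, unordered batches to reduce pressure per insert.
// `entites` and `Batch(...)` are taken from the original code above.
IMongoCollection<T> stageCollection = Database.GetCollection<T>(StageName);

var options = new InsertManyOptions { IsOrdered = false };
foreach (var batch in entites.Batch(100))
{
    await stageCollection.InsertManyAsync(batch, options);
}
```

With `IsOrdered = false` the server may also continue past individual duplicate-key errors from the unique index instead of aborting the whole batch, which can matter for a staging import like this.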

There is one unique compound index like this:

collection.Indexes.CreateOneAsync(IndexKeys
                    .Ascending(x => x.DepatureCityId)
                    .Ascending(x => x.ArrivalCityId),
                new CreateIndexOptions
                {
                    Unique = true,
                    Background = true
                });


Is there any advice on how to limit the memory so that our nodes don't run into OOMs?

Here is our YAML configuration for the Mongo database:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:3.6
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--bind_ip"
        - "0.0.0.0"
        - "--noprealloc"
        - "--wiredTigerCacheSizeGB"
        - "1.5"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 32Gi
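
One thing the manifest above never sets is a memory request/limit on the mongo container itself, so Kubernetes has no cap to enforce before the node runs out. A sketch of what that could look like (the 2Gi/3Gi figures are illustrative, sized around the 1.5 GB WiredTiger cache plus headroom for connections and index builds, not tuned values):

```yaml
# Sketch only: add under the mongo container in the StatefulSet above.
# Sizes assume wiredTigerCacheSizeGB=1.5 plus headroom; adjust for your workload.
      containers:
      - name: mongo
        image: mongo:3.6
        resources:
          requests:
            memory: "2Gi"
          limits:
            memory: "3Gi"   # container is OOM-killed above this, protecting the node
```

Note that mongod uses memory beyond the WiredTiger cache, so a limit equal to the cache size would be too tight.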

Boas Enkler

Mar 6, 2018, 7:41:43 AM3/6/18
to mongodb-user
In the meantime I also set pod anti-affinity to other pods, to make sure that no other pod is running on the given node.
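
For reference, a pod anti-affinity block like the one mentioned could look roughly like this in the StatefulSet's pod spec (a sketch; the `role=mongo` label matches the manifest above, and `kubernetes.io/hostname` as topology key keeps the mongo pods on separate nodes):

```yaml
# Sketch: goes under spec.template.spec of the StatefulSet.
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: role
                operator: In
                values:
                - mongo
            topologyKey: kubernetes.io/hostname
```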