MongoDB slowed down in tcmalloc


Eric Camachat

Apr 13, 2016, 3:12:48 PM4/13/16
to mongodb-user
Hi,

I tested our system on EC2 r3.8xlarge instances; after running for 3 days it became very slow.
Average CPU/RAM/disk usage is low, but I saw a random CPU spike to 100% for a couple of seconds.
So I used "perf top" to monitor a CPU, and it showed 23.57% of CPU time was spent in the tcmalloc::CentralFreeList::FetchFromOneSpans() function of the mongod process.
The application just collects data from IoT devices and updates/inserts their current/historical state.

Is this because of the $addToSet operation, or can some other operation cause mongod to get stuck in tcmalloc?
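For context, a rough sketch of the kind of per-message upsert we issue (the collection and field names here are made up for illustration, not our real schema):

```python
# Rough sketch of the per-message upsert; field names (readings, alerts,
# tags, last_seen) are illustrative assumptions, not the real schema.

def build_state_update(reading, alert, tag):
    """Build an update document that $addToSet's into three arrays."""
    return {
        "$addToSet": {
            "readings": reading,  # no-op if an identical element already exists
            "alerts": alert,
            "tags": tag,
        },
        "$set": {"last_seen": reading["ts"]},
    }

update = build_state_update({"ts": 1460585568, "temp_c": 21.5}, "none", "iot")
# With PyMongo this would be applied roughly as:
#   coll.update_one({"device_id": device_id}, update, upsert=True)
```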

Thanks,
Eric

Kevin Adistambha

Apr 29, 2016, 4:29:16 AM4/29/16
to mongodb-user

Hi Eric,

> I tested our system on EC2 r3.8xlarge instances; after running for 3 days it became very slow.

Could you clarify what you mean by slow and how you are measuring this?

> Average CPU/RAM/disk usage is low, but I saw a random CPU spike to 100% for a couple of seconds.

CPU usage could be related to index builds, document updates, aggregation queries, compression/decompression of documents in the WiredTiger storage engine, or any other normal operations. Is there any pattern to the CPU spikes that you can relate to some specific operation?

Could you please specify:

  • your operating system and MongoDB version
  • the storage engine used (MMAPv1 or WiredTiger)
  • whether the system is running a single mongod, or multiple mongod processes on the server
  • whether there are other processes running on the system that could create resource contention (e.g. other database servers, web servers, etc.)

> Is this because of the $addToSet operation, or can some other operation cause mongod to get stuck in tcmalloc?

Why do you suspect $addToSet is the cause?

Knowing a little about your use case might help:

  • can you provide an example document?
  • how many elements are typically in your arrays when you use $addToSet?
  • can you provide example output for slow queries (log lines and, ideally, the query with explain(true))?

Also, may I ask what tooling you use to monitor the performance of your MongoDB deployment? You might want to check out MongoDB Cloud Manager, which collects detailed performance metrics. Note: Cloud Manager is a freemium service with a 30-day free trial period.

Best regards,
Kevin

Eric Camachat

May 2, 2016, 5:18:58 PM5/2/16
to mongodb-user


On Friday, April 29, 2016 at 1:29:16 AM UTC-7, Kevin Adistambha wrote:

> Hi Eric,

> > I tested our system on EC2 r3.8xlarge instances; after running for 3 days it became very slow.

> Could you clarify what you mean by slow and how you are measuring this?

An aggregation takes tens or hundreds of seconds, compared to the default profiling threshold for a slow query (100 ms).

> > Average CPU/RAM/disk usage is low, but I saw a random CPU spike to 100% for a couple of seconds.

> CPU usage could be related to index builds, document updates, aggregation queries, compression/decompression of documents in the WiredTiger storage engine, or any other normal operations. Is there any pattern to the CPU spikes that you can relate to some specific operation?

> Could you please specify:
>
>   • your operating system and MongoDB version
>   • the storage engine used (MMAPv1 or WiredTiger)
>   • whether the system is running a single mongod, or multiple mongod processes on the server
>   • whether there are other processes running on the system that could create resource contention (e.g. other database servers, web servers, etc.)
A single mongod v3.2.4 (from http://repo.mongodb.org/apt/ubuntu/dists/trusty/) with all defaults on each shard node, and a single config server.

> > Is this because of the $addToSet operation, or can some other operation cause mongod to get stuck in tcmalloc?

> Why do you suspect $addToSet is the cause?

From what I found by Googling, $addToSet looks like a slower operation.

> Knowing a little about your use case might help:
>
>   • can you provide an example document?
>   • how many elements are typically in your arrays when you use $addToSet?
Each document has 3 arrays, and each update does an $addToSet on all 3 of them.
The historical hourly documents also contain per-minute sub-documents, so each update there does an $addToSet on 3 + 3 = 6 arrays.
>   • can you provide example output for slow queries (log lines and, ideally, the query with explain(true))?
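To make that concrete, here is a hypothetical sketch of one historical-document update (the field names and the "minutes" layout are assumptions, not our exact schema). Note that $addToSet has to scan each target array for duplicates, so its cost grows with array length:

```python
# Hypothetical sketch of one hourly-document update that $addToSet's into
# the three hourly-level arrays plus the three arrays of the matching
# per-minute sub-document (6 arrays total). Field names are illustrative.

def build_hourly_update(minute, reading, alert, tag):
    prefix = "minutes.%d." % minute  # dotted path into the embedded sub-document
    return {
        "$addToSet": {
            # hourly-level arrays
            "readings": reading,
            "alerts": alert,
            "tags": tag,
            # per-minute arrays inside the sub-document for this minute
            prefix + "readings": reading,
            prefix + "alerts": alert,
            prefix + "tags": tag,
        }
    }

update = build_hourly_update(42, {"temp_c": 21.5}, "none", "iot")
# update["$addToSet"] now targets 6 array fields, e.g. "minutes.42.readings"
```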

> Also, may I ask what tooling you use to monitor the performance of your MongoDB deployment? You might want to check out MongoDB Cloud Manager, which collects detailed performance metrics. Note: Cloud Manager is a freemium service with a 30-day free trial period.

We planned to separate insert/update of the current state (to keep it in cache) from read/aggregation (reads and aggregations appear to impact insert/update) into different database instances.
In our tests this has worked fine so far.
For the insert/update instance we set storage.wiredTiger.engineConfig.cacheSizeGB to 1 GB to get more stable response times; even though the OS page cache uses all remaining RAM within 3 days, response times are still good after weeks.
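For reference, the corresponding fragment of mongod.conf (standard YAML layout for the setting named above):

```yaml
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1   # cap the WiredTiger cache; the OS page cache uses the rest
```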


