GC overhead limit exceeded


tiny657

Aug 31, 2020, 9:09:51 PM
to Druid User
Hi,
I got an out-of-memory error even though my data volume is not that big compared to the server.

Error Message: Terminating due to java.lang.OutOfMemoryError: GC overhead limit exceeded


I installed Druid on a single i3.4xlarge machine and launched it with bin/start-medium.
-----
Medium: 16 CPU, 128GB RAM (~i3.4xlarge)
  • Launch command: bin/start-medium
  • Configuration directory: conf/druid/single-server/medium


Here is the default JVM config for medium that I used:
- broker: -Xms8g -Xmx8g -XX:MaxDirectMemorySize=5g
- coordinator-overlord: -Xms9g -Xmx9g
- historical: -Xms8g -Xmx8g -XX:MaxDirectMemorySize=13g
- middleManager: -Xms256m -Xmx256m
- router: -Xms512m -Xmx512m -XX:MaxDirectMemorySize=128m


Also, I tried increasing the memory in the JVM configs, but got the same out-of-memory error:
- broker: -Xms32g -Xmx32g -XX:MaxDirectMemorySize=20g
- coordinator-overlord: -Xms36g -Xmx36g
- historical: -Xms32g -Xmx32g -XX:MaxDirectMemorySize=52g
- middleManager: -Xms10g -Xmx10g
- router: -Xms2g -Xmx2g -XX:MaxDirectMemorySize=512m
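For reference, these flags live in the per-service jvm.config files (one JVM argument per line), e.g. conf/druid/single-server/medium/historical/jvm.config. A sketch of that file under the stock layout (your copy may carry extra flags):

-server
-Xms8g
-Xmx8g
-XX:MaxDirectMemorySize=13g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8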


Here is an example of the data I used. I tried to ingest 1.5 GB (25M rows).
(attached screenshot of sample rows: Screen Shot 2020-08-31 at 2.55.37 PM.png)

Ingestion spec:

{
  "type": "index_parallel",
  "spec": {
    "dataSchema": {
      "dataSource": "table_name",
      "dimensionsSpec": {
        "dimensions": ["test_option", "abtest_metric_id", "dimension_name", "dimension_value", "event_value", "user_id"]
      },
      "timestampSpec": {
        "column": "dt",
        "format": "yyyyMMdd",
        "missingValue": "20200822"
      },
      "metricsSpec": [ { "type": "count", "name": "count" } ],
      "granularitySpec": {
        "segmentGranularity": "day",
        "queryGranularity": "none"
      }
    },
    "ioConfig": {
      "type": "index_parallel",
      "inputSource": {
        "type": "s3",
        "prefixes": ["s3://buckets/"]
      },
      "inputFormat": {
        "type": "parquet"
      }
    },
    "tuningConfig": {
      "type": "index_parallel",
      "maxNumConcurrentSubTasks": 20
    }
  }
}



Full Error Log:

2020-09-01T00:51:14,974 INFO [main] org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost sessionTimeout=30000 watcher=org.apache.curator.ConnectionState@55d99dc3
2020-09-01T00:51:15,035 INFO [main] org.apache.curator.framework.imps.CuratorFrameworkImpl - Default schema
2020-09-01T00:51:15,045 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2020-09-01T00:51:15,054 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session
2020-09-01T00:51:15,077 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000f97e8a80014, negotiated timeout = 30000
2020-09-01T00:51:15,083 INFO [main-EventThread] org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
2020-09-01T00:51:15,206 INFO [NodeRoleWatcher[COORDINATOR]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Node[http://localhost:8081] of role[coordinator] detected.
2020-09-01T00:51:15,206 INFO [NodeRoleWatcher[OVERLORD]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Node[http://localhost:8081] of role[overlord] detected.
2020-09-01T00:51:15,206 INFO [NodeRoleWatcher[COORDINATOR]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Node watcher of role[coordinator] is now initialized.
2020-09-01T00:51:15,206 INFO [NodeRoleWatcher[OVERLORD]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Node watcher of role[overlord] is now initialized.
2020-09-01T00:51:15,378 INFO [main] org.apache.druid.indexing.worker.executor.ExecutorLifecycle - Running with task: {
  "type" : "single_phase_sub_task",
  "id" : "single_phase_sub_task_maxRetry3_biinapgd_2020-09-01T00:49:33.063Z",
  "groupId" : "index_parallel_maxRetry3_pgibohji_2020-09-01T00:45:45.039Z",
  "resource" : {
    "availabilityGroup" : "single_phase_sub_task_maxRetry3_biinapgd_2020-09-01T00:49:33.063Z",
    "requiredCapacity" : 1
  },
  "supervisorTaskId" : "index_parallel_maxRetry3_pgibohji_2020-09-01T00:45:45.039Z",
  "numAttempts" : 2,
  "spec" : {
    "dataSchema" : {
      "dataSource" : "maxRetry3",
      "timestampSpec" : {
        "column" : "dt",
        "format" : "yyyyMMdd",
        "missingValue" : "20200822-01-01T00:00:00.000Z"
      },
      "dimensionsSpec" : {
        "dimensions" : [ {
          "type" : "string",
          "name" : "test_option",
          "multiValueHandling" : "SORTED_ARRAY",
          "createBitmapIndex" : true
        }, {
          "type" : "string",
          "name" : "abtest_metric_id",
          "multiValueHandling" : "SORTED_ARRAY",
          "createBitmapIndex" : true
        }, {
          "type" : "string",
          "name" : "dimension_name",
          "multiValueHandling" : "SORTED_ARRAY",
          "createBitmapIndex" : true
        }, {
          "type" : "string",
          "name" : "dimension_value",
          "multiValueHandling" : "SORTED_ARRAY",
          "createBitmapIndex" : true
        }, {
          "type" : "string",
          "name" : "event_value",
          "multiValueHandling" : "SORTED_ARRAY",
          "createBitmapIndex" : true
        }, {
          "type" : "string",
          "name" : "user_id",
          "multiValueHandling" : "SORTED_ARRAY",
          "createBitmapIndex" : true
        } ],
        "dimensionExclusions" : [ "dt", "count" ]
      },
      "metricsSpec" : [ {
        "type" : "count",
        "name" : "count"
      } ],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "DAY",
        "queryGranularity" : {
          "type" : "none"
        },
        "rollup" : true,
        "intervals" : null
      },
      "transformSpec" : {
        "filter" : null,
        "transforms" : [ ]
      }
    },
    "ioConfig" : {
      "type" : "index_parallel",
      "inputSource" : {
        "type" : "s3",
        "uris" : null,
        "prefixes" : null,
        "objects" : [ {
          "bucket" : "s3-abtest-prod",
          "path" : "xpe/daily_unique/area_metric_type=Search/abtest_instance_id=34554/dt=20200822/part-00006-ef28c0f9-f2d4-4f7a-84a9-1e36aea39816-c000.snappy.parquet"
        }, {
          "bucket" : "s3-abtest-prod",
          "path" : "xpe/daily_unique/area_metric_type=Search/abtest_instance_id=34554/dt=20200822/part-00006-ef28c0f9-f2d4-4f7a-84a9-1e36aea39816-c001.snappy.parquet"
        }, {
          "bucket" : "s3-abtest-prod",
          "path" : "xpe/daily_unique/area_metric_type=Search/abtest_instance_id=34554/dt=20200822/part-00007-eccc7759-c75a-43c8-817b-2fa2076967d8-c000.snappy.parquet"
        }, {
          "bucket" : "s3-abtest-prod",
          "path" : "xpe/daily_unique/area_metric_type=Search/abtest_instance_id=34554/dt=20200822/part-00007-ef28c0f9-f2d4-4f7a-84a9-1e36aea39816-c000.snappy.parquet"
        }, {
          "bucket" : "s3-abtest-prod",
          "path" : "xpe/daily_unique/area_metric_type=Search/abtest_instance_id=34554/dt=20200822/part-00007-ef28c0f9-f2d4-4f7a-84a9-1e36aea39816-c001.snappy.parquet"
        }, {
          "bucket" : "s3-abtest-prod",
          "path" : "xpe/daily_unique/area_metric_type=Search/abtest_instance_id=34554/dt=20200822/part-00008-eccc7759-c75a-43c8-817b-2fa2076967d8-c000.snappy.parquet"
        }, {
          "bucket" : "s3-abtest-prod",
          "path" : "xpe/daily_unique/area_metric_type=Search/abtest_instance_id=34554/dt=20200822/part-00008-ef28c0f9-f2d4-4f7a-84a9-1e36aea39816-c000.snappy.parquet"
        }, {
          "bucket" : "s3-abtest-prod",
          "path" : "xpe/daily_unique/area_metric_type=Search/abtest_instance_id=34554/dt=20200822/part-00008-ef28c0f9-f2d4-4f7a-84a9-1e36aea39816-c001.snappy.parquet"
        }, {
          "bucket" : "s3-abtest-prod",
          "path" : "xpe/daily_unique/area_metric_type=Search/abtest_instance_id=34554/dt=20200822/part-00009-eccc7759-c75a-43c8-817b-2fa2076967d8-c000.snappy.parquet"
        } ],
        "properties" : null
      },
      "inputFormat" : {
        "type" : "parquet",
        "flattenSpec" : {
          "useFieldDiscovery" : true,
          "fields" : [ ]
        },
        "binaryAsString" : false
      },
      "appendToExisting" : false
    },
    "tuningConfig" : {
      "type" : "index_parallel",
      "maxRowsPerSegment" : null,
      "maxRowsInMemory" : 1000000,
      "maxBytesInMemory" : 0,
      "maxTotalRows" : null,
      "numShards" : null,
      "splitHintSpec" : null,
      "partitionsSpec" : null,
      "indexSpec" : {
        "bitmap" : {
          "type" : "roaring",
          "compressRunOnSerialization" : true
        },
        "dimensionCompression" : "lz4",
        "metricCompression" : "lz4",
        "longEncoding" : "longs",
        "segmentLoader" : null
      },
      "indexSpecForIntermediatePersists" : {
        "bitmap" : {
          "type" : "roaring",
          "compressRunOnSerialization" : true
        },
        "dimensionCompression" : "lz4",
        "metricCompression" : "lz4",
        "longEncoding" : "longs",
        "segmentLoader" : null
      },
      "maxPendingPersists" : 0,
      "forceGuaranteedRollup" : false,
      "reportParseExceptions" : false,
      "pushTimeout" : 0,
      "segmentWriteOutMediumFactory" : null,
      "maxNumConcurrentSubTasks" : 20,
      "maxRetry" : 3,
      "taskStatusCheckPeriodMs" : 1000,
      "chatHandlerTimeout" : "PT10S",
      "chatHandlerNumRetries" : 5,
      "maxNumSegmentsToMerge" : 100,
      "totalNumMergeTasks" : 10,
      "logParseExceptions" : false,
      "maxParseExceptions" : 2147483647,
      "maxSavedParseExceptions" : 0,
      "buildV9Directly" : true,
      "partitionDimensions" : [ ]
    }
  },
  "context" : {
    "forceTimeChunkLock" : true
  },
  "dataSource" : "maxRetry3"
}
2020-09-01T00:51:15,380 INFO [main] org.apache.druid.indexing.worker.executor.ExecutorLifecycle - Attempting to lock file[var/druid/task/single_phase_sub_task_maxRetry3_biinapgd_2020-09-01T00:49:33.063Z/lock].
2020-09-01T00:51:15,387 INFO [main] org.apache.druid.indexing.worker.executor.ExecutorLifecycle - Acquired lock file[var/druid/task/single_phase_sub_task_maxRetry3_biinapgd_2020-09-01T00:49:33.063Z/lock] in 2ms.
2020-09-01T00:51:15,394 INFO [main] org.apache.druid.indexing.common.task.AbstractBatchIndexTask - [forceTimeChunkLock] is set to true in task context. Use timeChunk lock
2020-09-01T00:51:15,415 INFO [task-runner-0-priority-0] org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner - Running task: single_phase_sub_task_maxRetry3_biinapgd_2020-09-01T00:49:33.063Z
2020-09-01T00:51:15,418 WARN [task-runner-0-priority-0] org.apache.druid.indexing.common.task.batch.parallel.SinglePhaseSubTask - Intervals are missing in granularitySpec while this task is potentially overwriting existing segments. Forced to use timeChunk lock.
2020-09-01T00:51:15,421 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Starting lifecycle [module] stage [SERVER]
2020-09-01T00:51:15,425 INFO [main] org.eclipse.jetty.server.Server - jetty-9.4.12.v20180830; built: 2018-08-30T13:59:14.071Z; git: 27208684755d94a92186989f695db2d7b21ebc51; jvm 1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
2020-09-01T00:51:15,740 INFO [main] org.eclipse.jetty.server.session - DefaultSessionIdManager workerName=node0
2020-09-01T00:51:15,740 INFO [main] org.eclipse.jetty.server.session - No SessionScavenger set, using defaults
2020-09-01T00:51:15,742 INFO [main] org.eclipse.jetty.server.session - node0 Scavenging every 600000ms
2020-09-01T00:51:16,086 INFO [main] com.sun.jersey.server.impl.application.WebApplicationImpl - Initiating Jersey application, version 'Jersey: 1.19.3 10/24/2016 03:43 PM'
2020-09-01T00:51:18,040 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@2bf4fa1{/,null,AVAILABLE}
2020-09-01T00:51:18,111 INFO [main] org.eclipse.jetty.server.AbstractConnector - Started ServerConnector@40aad17d{HTTP/1.1,[http/1.1]}{0.0.0.0:8101}
2020-09-01T00:51:18,112 INFO [main] org.eclipse.jetty.server.Server - Started @16126ms
2020-09-01T00:51:18,115 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Starting lifecycle [module] stage [ANNOUNCEMENTS]
2020-09-01T00:51:18,115 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Successfully started lifecycle [module]
2020-09-01T00:51:21,011 INFO [task-runner-0-priority-0] org.apache.parquet.hadoop.InternalParquetRecordReader - RecordReader initialized will read a total of 2000000 records.
2020-09-01T00:51:21,011 INFO [task-runner-0-priority-0] org.apache.parquet.hadoop.InternalParquetRecordReader - at row 0. reading next block
2020-09-01T00:51:21,134 INFO [task-runner-0-priority-0] org.apache.hadoop.io.compress.CodecPool - Got brand-new decompressor [.snappy]
2020-09-01T00:51:21,150 INFO [task-runner-0-priority-0] org.apache.parquet.hadoop.InternalParquetRecordReader - block read in memory in 139 ms. row count = 2000000
2020-09-01T00:51:21,779 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver - New segment[maxRetry3_20200822-01-01T00:00:00.000Z_20200822-01-02T00:00:00.000Z_2020-09-01T00:46:01.484Z_8] for sequenceName[single_phase_sub_task_maxRetry3_biinapgd_2020-09-01T00:49:33.063Z].
2020-09-01T00:51:39,503 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.AppenderatorImpl - Flushing in-memory data to disk because No more rows can be appended to sink,bytesCurrentlyInMemory[171529596] is greater than maxBytesInMemory[171529557].
2020-09-01T00:52:24,863 WARN [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Client session timed out, have not heard from server in 26076ms for sessionid 0x1000f97e8a80014
2020-09-01T00:52:24,864 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Client session timed out, have not heard from server in 26076ms for sessionid 0x1000f97e8a80014, closing socket connection and attempting reconnect
2020-09-01T00:52:29,704 INFO [main-EventThread] org.apache.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
2020-09-01T00:52:29,705 WARN [NodeRoleWatcher[OVERLORD]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Ignored event type[CONNECTION_SUSPENDED] for node watcher of role[overlord].
2020-09-01T00:52:29,706 WARN [NodeRoleWatcher[COORDINATOR]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Ignored event type[CONNECTION_SUSPENDED] for node watcher of role[coordinator].
2020-09-01T00:52:31,883 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2020-09-01T00:52:31,884 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session
2020-09-01T00:52:31,886 WARN [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Unable to reconnect to ZooKeeper service, session 0x1000f97e8a80014 has expired
2020-09-01T00:52:31,886 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Unable to reconnect to ZooKeeper service, session 0x1000f97e8a80014 has expired, closing socket connection
2020-09-01T00:52:31,886 WARN [main-EventThread] org.apache.curator.ConnectionState - Session expired event received
2020-09-01T00:52:31,887 INFO [main-EventThread] org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost sessionTimeout=30000 watcher=org.apache.curator.ConnectionState@55d99dc3
2020-09-01T00:52:34,162 INFO [main-EventThread] org.apache.curator.framework.state.ConnectionStateManager - State change: LOST
2020-09-01T00:52:34,162 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000f97e8a80014
2020-09-01T00:52:34,164 WARN [NodeRoleWatcher[OVERLORD]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Ignored event type[CONNECTION_LOST] for node watcher of role[overlord].
2020-09-01T00:52:34,164 WARN [NodeRoleWatcher[COORDINATOR]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Ignored event type[CONNECTION_LOST] for node watcher of role[coordinator].
2020-09-01T00:52:36,705 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2020-09-01T00:52:36,705 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session
2020-09-01T00:52:36,709 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000f97e8a80017, negotiated timeout = 30000
2020-09-01T00:52:36,709 INFO [main-EventThread] org.apache.curator.framework.state.ConnectionStateManager - State change: RECONNECTED
2020-09-01T00:52:36,712 WARN [NodeRoleWatcher[OVERLORD]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Ignored event type[CONNECTION_RECONNECTED] for node watcher of role[overlord].
2020-09-01T00:52:36,716 WARN [NodeRoleWatcher[COORDINATOR]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Ignored event type[CONNECTION_RECONNECTED] for node watcher of role[coordinator].
Terminating due to java.lang.OutOfMemoryError: GC overhead limit exceeded

tiny657

Aug 31, 2020, 9:22:17 PM
to Druid User
Also, while ingesting data from S3, CPU usage is at 99%. Is that normal?

(attached screenshot of CPU usage: Screen Shot 2020-08-31 at 5.52.41 PM.png)


On Monday, August 31, 2020 at 6:09:51 PM UTC-7, tiny657 wrote:

Rachel Pedreschi

Aug 31, 2020, 9:30:53 PM
to druid...@googlegroups.com
This doc may help: https://druid.apache.org/docs/latest/ingestion/native-batch.html#parallel-task

I'd start with a config that uses fewer resources and make sure that runs, then step up to utilize more of the resources. Also, what version of Druid are you using?



--
Rachel Pedreschi
VP Developer Relations and Community
Imply.io

youngwan lim

Aug 31, 2020, 9:33:21 PM
to druid...@googlegroups.com
I am using the latest version.



Jay R

Sep 1, 2020, 12:52:59 AM
to Druid User
@tiny657

Did you change the runtime.properties too? While ingestion is in progress, ssh to the middleManager node and check exactly how much Xms/Xmx the task processes are actually using (they may be using the default Xms1g):

ps -ef | grep Xmx
ps -ef | grep druid

After changing the config files, apply the changes to the system before ingesting:

systemctl daemon-reload   (CentOS)
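One thing to know: on the single-server layouts, the heap for each spawned ingestion task (peon) comes from the middleManager's runtime.properties, not from its jvm.config. A sketch of the relevant lines in conf/druid/single-server/medium/middleManager/runtime.properties (assuming the stock file; exact defaults vary by version):

# Number of task slots this middleManager offers
druid.worker.capacity=4
# JVM options handed to every spawned ingestion task (peon)
druid.indexer.runner.javaOptsArray=["-server","-Xms1g","-Xmx1g","-Duser.timezone=UTC","-Dfile.encoding=UTF-8"]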

Regards,
Jay R

youngwan lim

Sep 1, 2020, 1:00:07 AM
to druid...@googlegroups.com
Hi Jay,
Thanks for your reply.
I did not touch runtime.properties.
So the middleManager tasks are using `-Xms1g -Xmx1g` while ingesting.

Do you have any recommended heap size?

Regards,

Jay R

Sep 1, 2020, 1:13:05 AM
to Druid User
  1. How many tasks were spawned? If it's n, then each task can have Xmx = (memory allocated for the middleManager) / n (see the worked example below). Try increasing the Xmx in the runtime properties.
  2. Since the machine you used has only 16 CPUs, decrease maxNumConcurrentSubTasks to 8 (after excluding what the other processes need).
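For example, with illustrative numbers (not measured from this thread): if 10g of the host is set aside for ingestion tasks and n = 4 of them run concurrently, each task can afford roughly

  Xmx = 10g / 4 = 2.5g

Conversely, 20 concurrent sub-tasks at the default -Xmx1g would claim up to 20g of heap between them, before counting direct memory.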

youngwan lim

Sep 1, 2020, 1:25:41 AM
to druid...@googlegroups.com
Thanks.
After replacing the heap settings in the runtime properties with -Xms5g -Xmx5g, there is no more OOM.

I am using `druid.worker.capacity=4`.
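For anyone else hitting this, the change amounts to one line in the middleManager's runtime.properties (a sketch, assuming the javaOptsArray form of the setting):

druid.indexer.runner.javaOptsArray=["-server","-Xms5g","-Xmx5g","-Duser.timezone=UTC","-Dfile.encoding=UTF-8"]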

youngwan lim

Sep 1, 2020, 1:34:12 AM
to druid...@googlegroups.com
One more quick question:
- After changing druid.worker.capacity from 4 to 8, only 4 tasks still run in parallel for ingestion.

Should I change some other config to increase the number of concurrent tasks?

Jay R

Sep 2, 2020, 10:22:55 AM
to Druid User

Since it is a single node, there will be only a few cores available for the middleManager after excluding the cores the other processes need.
Then maxNumConcurrentSubTasks drives the parallelism within those available CPU cores.
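Roughly, the two knobs interact like this (a sketch of my understanding, not verified against the source):

# middleManager runtime.properties: task slots this node offers
druid.worker.capacity=8

# ingestion spec tuningConfig: sub-tasks the supervisor may run at once
"maxNumConcurrentSubTasks": 8

Effective parallelism is about min(total worker capacity, maxNumConcurrentSubTasks), and note that the parallel-index supervisor task itself also occupies one slot.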