When using Hystrix's thread isolation strategy, it internally creates a ThreadPoolExecutor. Hystrix passes the coreSize property to both the corePoolSize and maximumPoolSize of that ThreadPoolExecutor, so it ends up creating and keeping "coreSize" threads at all times. The keep-alive of ThreadPoolExecutor only kicks in when there are more than corePoolSize threads, so these threads are never terminated.
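This behavior can be reproduced with a plain JDK ThreadPoolExecutor configured the same way (the class and method names below are mine for illustration, not Hystrix code):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedPoolDemo {
    // Mimics Hystrix's configuration: coreSize is used for both
    // corePoolSize and maximumPoolSize. Returns the live thread count
    // after the pool has been idle far longer than keepAliveTime.
    static int aliveThreadsAfterIdle(int coreSize) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                coreSize, coreSize,          // corePoolSize == maximumPoolSize
                1, TimeUnit.SECONDS,         // keepAliveTime: applies only above corePoolSize
                new LinkedBlockingQueue<>());
        for (int i = 0; i < coreSize; i++) {
            pool.execute(() -> {});          // each submission spawns one core thread
        }
        Thread.sleep(2000);                  // idle well past keepAliveTime
        int alive = pool.getPoolSize();      // core threads are exempt from keep-alive
        pool.shutdown();
        return alive;
    }

    public static void main(String[] args) throws InterruptedException {
        // All coreSize threads survive despite being idle.
        System.out.println("threads alive after idle: " + aliveThreadsAfterIdle(10)); // prints 10
    }
}
```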
With each command execution a new thread is created until the pool reaches its maximum of 10. Once 10 threads are spawned, they are never terminated even when unused (observed over the JMX console and by taking a thread dump).
Implication:
A middleware system that aggregates data from many microservices (in an SOA) isolates interactions with each of these services (over network calls) using Hystrix thread pools. With hundreds of such dependencies or microservices to call, even at the default thread pool size of 10 we end up creating 1000 Hystrix threads that are never terminated (at times when requests to particular dependencies are low, those threads need not stay active).
Potential solutions:
1. Expose maxPoolSize as a configurable property and keep corePoolSize and maxPoolSize separate, allowing the thread pools to grow and shrink dynamically.
2. Use allowCoreThreadTimeOut of ThreadPoolExecutor so that even core threads time out.
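Solution 2 can be sketched against a plain ThreadPoolExecutor (the class and method names are mine; Hystrix would have to call allowCoreThreadTimeOut on its internal executor):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreTimeoutDemo {
    // Returns the number of live threads after the pool has been idle
    // longer than keepAliveTime, with core-thread timeout on or off.
    static int idleThreads(boolean allowCoreTimeout) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 10,                              // same fixed-size configuration as Hystrix
                200, TimeUnit.MILLISECONDS,          // keepAliveTime must be > 0 for core timeout
                new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(allowCoreTimeout);
        for (int i = 0; i < 10; i++) {
            pool.execute(() -> {});                  // spawn all 10 core threads
        }
        Thread.sleep(1000);                          // idle well past keepAliveTime
        int alive = pool.getPoolSize();
        pool.shutdown();
        return alive;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("without core timeout: " + idleThreads(false)); // prints 10
        System.out.println("with core timeout:    " + idleThreads(true));  // prints 0
    }
}
```

With allowCoreThreadTimeOut(true), idle core threads are reclaimed after keepAliveTime and recreated on demand, so an idle pool shrinks to zero threads.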
I would like to understand the real motive behind using coreSize for both the corePoolSize and maximumPoolSize of ThreadPoolExecutor.