grpc executor threads


Jeff Steger

Jan 2, 2022, 12:39:04 PM
to grpc.io
grpc-java has a method in its ServerBuilder class to set the Executor. Is there similar functionality for grpc-c++ ? I am running a C++ grpc server and the number of executor threads it spawns is high and seems to never decrease, even when connections stop. 

Mark D. Roth

Jan 4, 2022, 11:55:09 AM
to Jeff Steger, grpc.io
I answered this in the other thread you posted on.

On Sun, Jan 2, 2022 at 9:39 AM Jeff Steger <be2...@gmail.com> wrote:
grpc-java has a method in its ServerBuilder class to set the Executor. Is there similar functionality for grpc-c++ ? I am running a C++ grpc server and the number of executor threads it spawns is high and seems to never decrease, even when connections stop. 



--
Mark D. Roth <ro...@google.com>
Software Engineer
Google, Inc.

Jeff Steger

Jan 5, 2022, 3:51:21 PM
to Mark D. Roth, grpc.io
Can you specifically answer this:

grpc-java has a method in its ServerBuilder class to set the Executor. Is there similar functionality for grpc-c++ ?

Thanks!

Jeff Steger

Jan 5, 2022, 3:54:20 PM
to Mark D. Roth, grpc.io
Ah, never mind, I see you answered; apologies. Let me ask you this: am I stuck with all of these default-executor threads that my process is spawning? Is there no way to limit them? Do they come from the same pool as the grpc sync server threads?

Mark D. Roth

Jan 6, 2022, 3:39:49 PM
to Jeff Steger, grpc.io
The C++ sync server has one thread pool for both polling and request handlers. When a request comes in, an existing polling thread basically becomes a request handler thread, and when the request handler completes, that thread is available to become a polling thread again. The MIN_POLLERS and MAX_POLLERS options (which can be set via ServerBuilder::SetSyncServerOption()) allow tuning the number of threads used for polling: when a polling thread becomes a request handler thread and not enough polling threads remain, a new one is spawned; when a request handler finishes and there are too many polling threads, the thread terminates.
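[Editor's note: a minimal sketch of tuning these options on a sync server. The address, the poller values, and the commented-out service registration are illustrative placeholders, not recommendations.]

```cpp
#include <memory>

#include <grpcpp/grpcpp.h>

int main() {
  grpc::ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:50051",
                           grpc::InsecureServerCredentials());

  // Keep at least 1 and at most 4 threads dedicated to polling; the
  // values here are illustrative.
  builder.SetSyncServerOption(
      grpc::ServerBuilder::SyncServerOption::MIN_POLLERS, 1);
  builder.SetSyncServerOption(
      grpc::ServerBuilder::SyncServerOption::MAX_POLLERS, 4);

  // builder.RegisterService(&my_service);  // your service goes here

  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  server->Wait();
}
```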

Jeff Steger

Jan 6, 2022, 10:05:30 PM
to Mark D. Roth, grpc.io

Thanks for the info! It sort of sounds like the default-executor threads and the sync-server threads come from the same thread pool. I know that ServerBuilder::SetResourceQuota sets the max number of sync-server threads. Does it have any impact on the number of default-executor threads? It doesn't seem to.
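[Editor's note: for context, attaching a resource quota looks roughly like this. The quota name, the thread limit, and the address are illustrative placeholders.]

```cpp
#include <memory>

#include <grpcpp/grpcpp.h>
#include <grpcpp/resource_quota.h>

int main() {
  // Cap the number of threads the sync server may use; 32 here is
  // just an example value.
  grpc::ResourceQuota quota("server_quota");
  quota.SetMaxThreads(32);

  grpc::ServerBuilder builder;
  builder.SetResourceQuota(quota);
  builder.AddListeningPort("0.0.0.0:50051",
                           grpc::InsecureServerCredentials());

  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  server->Wait();
}
```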

Also, can you tell me under what conditions a process can expect to wind up with dozens of sleeping default-executor threads? My process doesn't start off like that, but after a few hours that's how it winds up. It is not under high load, just serving a few short-lived streaming-response queries every few seconds. If I limit the max number of these threads to, let's say, 10, do you anticipate that connections would be refused under these conditions (i.e. serving a few short-lived streaming-response queries every few seconds)?

Last, is it safe to manually kill these sleeping threads? They seem to be blocked on a condition variable.

Thanks!

-Jeff

Mark D. Roth

Jan 7, 2022, 11:14:43 AM
to Jeff Steger, grpc.io
Oh, sorry, I thought you were asking about the sync server threads.  The default-executor threads sound like threads that are spawned internally inside of C-core for things like synchronous DNS resolution; those should be completely unrelated to the sync server threads.  I'm not sure what would cause those threads to pile up.

Try running with the env vars GRPC_VERBOSITY=DEBUG and GRPC_TRACE=executor and see if that yields any useful log information. In particular, try running that with a debug build, since that will add additional information about where in the code the closures on the executor threads are coming from.
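[Editor's note: on the command line that looks like the following; `./my_server` is a placeholder for your own server binary.]

```shell
# Enable verbose logging and executor tracing; with a debug build of
# gRPC, the trace also shows where each closure was scheduled from.
GRPC_VERBOSITY=DEBUG GRPC_TRACE=executor ./my_server
```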

Jeff Steger

Jan 7, 2022, 3:47:51 PM
to Mark D. Roth, grpc.io
Thanks Mark, I will turn on trace and see if I see anything odd. I was reading about a function called Executor::SetThreadingDefault(bool enable) that I think I can safely call after I create my grpc server. It is a public function and seems to allow me to toggle between a threaded implementation and an async one. Is that accurate? Is calling this function safe and/or recommended (or at least not contra-recommended)? Thanks again for your help!

Jeff

Mark D. Roth

Jan 7, 2022, 6:02:08 PM
to Jeff Steger, grpc.io
No, that's not a public API, and you should not call it directly.  (It may be public in the class, but the class is not part of the gRPC public API.)

Jiqing Tang

May 12, 2023, 5:58:39 PM
to grpc.io
Hi Jeff and Mark,

I just ran into the same issue with an async C++ gRPC server (version 1.37.1), was curious about these default-executor threads, and then found this thread. Did you guys figure out what these threads are for? The number seems to be about 2x the number of polling worker threads.

Thanks!

Jeff Steger

May 12, 2023, 6:59:28 PM
to Jiqing Tang, grpc.io
This is as close to an explanation as I have found:

Look at sreecha’s response in that thread; tl;dr:
“The max number of threads can be 2x the number of cores and unfortunately it’s not configurable at the moment… any executor threads and timer-manager threads you see are by-design; unless the threads are more than 2x the number of cores on your machine, in which case it is clearly a bug”


From my observation of the thread count and from my examination of the grpc code (which I admit I performed some years ago), it is evident to me that the grpc framework spawns threads up to 2x the number of hardware cores. It will spawn a new thread if an existing thread in its threadpool is busy, iirc. The issue is that the grpc framework never reaps idle threads: once a thread is created, it is there for the lifetime of the grpc server. There is no way to configure the max number of threads either. It is really, imo, a sloppy design. Threads aren’t free, and this framework keeps (in my case) dozens and dozens of idle threads around even during long periods of low or no traffic. Maybe they fixed it in newer versions, idk.

Jiqing Tang

May 12, 2023, 7:03:58 PM
to grpc.io
Thanks so much Jeff; agreed, reaping them after they've been idle would be great.

AJ Heller

May 16, 2023, 7:44:06 PM
to grpc.io
Hello all, I want to offer a quick update. tl;dr: Jeff's analysis is correct. The executor is legacy code at this point, slated for deletion, and increasingly unused.

We have been carefully replacing the legacy I/O, timer, and async execution implementations with a new public EventEngine API and its default implementations. The new thread pools do still auto-scale as needed - albeit with different heuristics, which are evolving as we benchmark - but threads are now reclaimed if/when gRPC calms down from a burst of activity that caused the pool to grow. Also, I believe the executor did not rate-limit thread creation when closure queues reached their max depths, but the default EventEngine implementations do rate-limit thread creation (currently capped at 1 new thread per second, but that's an implementation detail which may change; some benchmarks have shown it to be a pretty effective rate). Beginning around gRPC v1.48, you should see an increasing number of "event_engine" threads and a decreasing number of executor threads. Ultimately we aim to unify all async activity into a single auto-scaling thread pool under the EventEngine.

And since the EventEngine is a public API, any integrators that want complete control over thread behavior can implement their own EventEngine and plug it into gRPC. gRPC will (eventually) use a provided engine for all async execution, timers, and I/O. Implementing an engine is not a small task, but it is an option people have been requesting for years. Otherwise, the default threading behavior provided by gRPC is tuned for performance - if starting a thread helps gRPC move faster, then that's what it will do.

Hope this helps!
-aj

Jeff Steger

May 17, 2023, 6:27:42 PM
to AJ Heller, grpc.io
Hi AJ,

Thanks for the reply. May I suggest making the max number of threads (and/or any rate limits) configurable.

Jeff
