Buffered channels: “good” vs “bad” waiting — any official docs?


Egor Ponomarev

Sep 1, 2025, 12:00:45 PM
to golang-nuts

We’re using a typical producer-consumer pattern: goroutines send messages to a channel, and a worker processes them. A colleague asked me why we even bother with a buffered channel (say, size 1000) if we’re waiting for the result anyway.

I tried to explain it like this: there are two kinds of waiting.

“Bad” waiting – when a goroutine is blocked trying to send to a full channel:
requestChan <- req // goroutine just hangs here, blocking the system

“Good” waiting – when the send succeeds quickly, and you wait for the result afterwards:
requestChan <- req // quickly enqueued
result := <-resultChan // wait for result without holding up others

The point: a big buffer lets goroutines hand off tasks fast and free themselves for new work. Under burst load, this is crucial — it lets the system absorb spikes without slowing everything down.
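Concretely, the shape I have in mind is something like this (a minimal sketch; the names mirror the fragments above, and the sizes are arbitrary):

package main

import "fmt"

func main() {
	requestChan := make(chan int, 1000) // big buffer: enqueueing rarely blocks
	resultChan := make(chan int, 1)

	go func() { // the single worker
		for req := range requestChan {
			resultChan <- req * 2 // placeholder for the real processing
		}
	}()

	requestChan <- 21         // quickly enqueued ("good" waiting)
	fmt.Println(<-resultChan) // wait for the result afterwards
}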

But here’s the twist: my colleague tested it with 2000 goroutines and got roughly the same processing time. His argument: “waiting to enqueue or dequeue seems to perform the same no matter how many goroutines are waiting.”

So my question is: does Go have any official docs that describe this idea? Effective Go shows semaphores, but it doesn’t really spell out this difference in blocking types.

Am I misunderstanding something, or is this just one of those “implicit Go concurrency truths” that everyone sort of knows but isn’t officially documented?

robert engels

Sep 1, 2025, 12:13:32 PM
to Egor Ponomarev, golang-nuts
There is not enough info to give a full recommendation, but I suspect you are misunderstanding how it works.

Buffered channels allow the producers to keep going instead of waiting for the consumer to finish.

If the producer can’t continue until the consumer runs and provides a value via a callback or other channel, then yes, the buffered channel might not seem to provide any value - except that in a highly concurrent environment goroutines are usually not in a pure ‘reading the channel’ mode - they are finishing up a previous request - so the buffering allows some additional concurrency in that state.

When requests are extremely short in duration this can matter a lot.

Usually, though, a better solution is to simply have N+1 consumers for N producers and use an unbuffered handoff channel - but if the workload is CPU-bound you will expend extra resources on context switching (i.e. thrashing), because these goroutines will be timesliced.

Better to cap the consumers and use a buffered channel.
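For example, roughly this shape (a sketch - the worker count and buffer size here are arbitrary):

package main

import (
	"fmt"
	"sync"
)

func main() {
	const numWorkers = 4           // capped consumer pool
	requests := make(chan int, 64) // buffer absorbs producer bursts

	var wg sync.WaitGroup
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for req := range requests { // drain until the channel is closed
				fmt.Printf("worker %d handled request %d\n", id, req)
			}
		}(w)
	}

	for i := 0; i < 100; i++ {
		requests <- i // blocks only when the buffer is full
	}
	close(requests)
	wg.Wait()
}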




Jason E. Aten

Sep 1, 2025, 3:04:48 PM
to golang-nuts
Hi Egor,

To add to what Robert advises -- there is no one-size-fits-all 
guidance that covers all situations. You have to understand the 
principles of operation and reason/measure from there. There are
heuristics, but even then exceptions to the rules of thumb abound.

As Robert said, in general the buffered channel will give you
more opportunity for parallelism, and might move your bottleneck
forward or back in the processing pipeline. 

You could try to locate your bottleneck and use tracing there
(but I've not used the tracer myself--I would just start with a
basic CPU profile and see if there are hot spots).
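For example, a basic CPU profile can be captured with runtime/pprof (a sketch - runWorkload stands in for your own code):

package main

import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	runWorkload() // the producer/consumer code under investigation
}

func runWorkload() {
	// ... the code being profiled ...
}

Then inspect the hot spots with: go tool pprof cpu.prof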

An old design heuristic in Go was to always start
with unbuffered channels. Then add buffering to tune
performance. 

However there are plenty of times when I
allocate a channel with a buffer of size 1 so that I know
my initial sender can queue an initial value without itself
blocking. 
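For instance (a trivial sketch):

package main

import (
	"errors"
	"fmt"
)

func main() {
	errc := make(chan error, 1) // capacity 1: the first send can never block
	go func() {
		errc <- errors.New("initial status") // sender finishes even if nobody is receiving yet
	}()
	fmt.Println(<-errc)
}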

Sometimes, for flow-control, I never want to
buffer a channel--in particular when going network <-> channel,
because I want the local back-pressure to propagate
through TCP/QUIC and result in back-pressure on the
remote side; if I buffer, then in effect I'm asking for work I cannot
yet handle.

If I'm using a channel as a test event history, then I typically
give it a massive buffer, and even then I also wrap it in a function
that will panic if the channel reaches its cap(), because
I never want my tests to be blocked on recording a
test-execution event "trace" that I'm going to
assert over in the test. So in that case I always want big buffers.
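That wrapper looks something like this (a sketch, assuming simple string events):

package main

import "fmt"

func main() {
	trace := make(chan string, 100000) // massive buffer for test events
	record := func(ev string) {
		select {
		case trace <- ev:
		default: // buffer is at cap(): fail loudly instead of blocking the test
			panic("test trace channel full; enlarge the buffer")
		}
	}
	record("connected")
	record("sent request")
	fmt.Println(len(trace), "events recorded")
}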

As above, exceptions to most heuristics are common.

In your particular example, I suspect your colleague is right
and you are not gaining anything from channel buffering--of course
it is impossible to know for sure without the system in front
of you to measure.

Lastly, you likely already realize this, but the request+response
wait pattern you cited typically needs both the request send and the wait
for the response to be wrapped in selects with a "bail-out" or shutdown channel:

jobTicket := makeJobTicketWithDoneChannel()
select {
case sendRequestToDoJobChan <- jobTicket:
case <-bailoutOnShutDownChan: // or <-ctx.Done(), etc.
	// exit/cleanup here
}
select {
case <-jobTicket.Done:
case <-bailoutOnShutDownChan:
	// exit/cleanup here
}
in order to enable graceful stopping/shutdown of goroutines.

Egor Ponomarev

Sep 2, 2025, 11:33:05 AM
to golang-nuts

Hi Robert, Jason,

Thank you both for your detailed and thoughtful responses — they helped me see the problem more clearly. Let me share some more details about our specific case:

  • We have exactly one consumer (worker), and we can’t add more because the underlying resource can only be accessed by one process at a time (think of it as exclusive access to a single connection).

  • The worker operation is a TCP connection, which is usually fast, but the network can occasionally be unreliable and introduce delays.

  • We may have lots of producers, and each producer waits for a result after submitting a request.

Given these constraints, can an unbuffered channel have any advantage over a buffered one for our case? 
My understanding is that producers will just end up blocking when the single worker can’t keep up - so it seems to make no difference whether the blocking happens at “enqueue time” (unbuffered channel) or later (buffered channel).

What’s your view - is there any practical difference between an unbuffered and a buffered channel in this situation?

Thanks again for the guidance!



Jason E. Aten

Sep 2, 2025, 11:46:49 AM
to golang-nuts
Yes, but not in terms of performance. Using a buffered
channel could provide more "fairness" in the sense of "first-come, first served".

If you depend on the (pseudo-randomized) select to decide which
producer's job gets serviced next, you could increase your response
latency by arbitrarily delaying an early job for a long time, while a
late-arriving job can "jump the queue" and get serviced immediately.

The buffered channel will preserve some of the arrival order. But--this
is only up to its capacity--after that, the late arrivers will still be randomly
serviced, due to the pseudo-random select mechanism. So if you
demand true FIFO for all jobs, then you might well be better served
by using a mutex and a slice to keep your jobs anyway--such
that the limit on your queue is available memory rather than a
fixed channel buffer size.
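A minimal sketch of that mutex-plus-slice queue (illustrative; sync.Cond lets a consumer block while the queue is empty):

package main

import (
	"fmt"
	"sync"
)

// fifo is a mutex-guarded slice queue; its limit is available memory,
// not a fixed channel buffer size.
type fifo struct {
	mu    sync.Mutex
	cond  *sync.Cond
	items []string
}

func newFIFO() *fifo {
	q := &fifo{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *fifo) push(s string) {
	q.mu.Lock()
	q.items = append(q.items, s)
	q.mu.Unlock()
	q.cond.Signal() // wake a waiting consumer
}

func (q *fifo) pop() string {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 {
		q.cond.Wait() // releases the lock while sleeping
	}
	s := q.items[0]
	q.items = q.items[1:]
	return s
}

func main() {
	q := newFIFO()
	q.push("job-1")
	q.push("job-2")
	fmt.Println(q.pop(), q.pop()) // served strictly in arrival order
}

With a single worker pop()ing from this queue, jobs are serviced in true FIFO order.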

Of course if you over-run all of your memory, you are in trouble 
again. As usual, back-pressure is a really critical component
of most designs. You usually want it.

robert engels

Sep 2, 2025, 11:49:51 AM
to Egor P, golang-nuts
In this case it won’t matter performance-wise, but two unbuffered channels - request and response - probably simplify things.


robert engels

Sep 2, 2025, 11:59:33 AM
to Jason E. Aten, golang-nuts
I don’t think this is correct. There is only a single select on the consumer side - the order of sends by the producers is already random, based on goroutine wakeup/scheduling.

David Finkel

Sep 2, 2025, 2:09:02 PM
to robert engels, Jason E. Aten, golang-nuts
On Tue, Sep 2, 2025 at 11:59 AM robert engels <ren...@ix.netcom.com> wrote:
I don’t think this is correct. There is only a single select on the consumer side - the order of sends by the producers is already random, based on goroutine wakeup/scheduling.

On Sep 2, 2025, at 10:46, Jason E. Aten <j.e....@gmail.com> wrote:

Yes, but not in terms of performance. Using a buffered
channel could provide more "fairness" in the sense of "first-come, first served".

If you depend on the (pseudo-randomized) select to decide which
producer's job gets serviced next, you could increase your response
latency by arbitrarily delaying an early job for a long time, while a
late-arriving job can "jump the queue" and get serviced immediately.

The buffered channel will preserve some of the arrival order. But--this
is only up to its capacity--after that, the late arrivers will still be randomly
serviced, due to the pseudo-random select mechanism. So if you
demand true FIFO for all jobs, then you might well be better served
by using a mutex and a slice to keep your jobs anyway--such
that the limit on your queue is available memory rather than a
fixed channel buffer size.
I don't think channel receive order is random when the senders are blocked.
Sending goroutines are queued in a linked list in FIFO order within the runtime's channel struct (hchan).
(Different cases of a select are chosen at random for fairness, though.)
I would recommend using a buffered channel with size 1 for any response channel, so the worker goroutine
doesn't have to wait for another goroutine to be scheduled before it can pick up the next work item in the queue.
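Roughly this shape (a sketch; the sizes here are arbitrary):

package main

import "fmt"

// request carries its own capacity-1 reply channel, so the single worker can
// deliver a result and immediately pick up the next job, without waiting for
// the producer goroutine to be scheduled.
type request struct {
	arg   int
	reply chan int
}

func worker(requests <-chan request) {
	for req := range requests {
		req.reply <- req.arg * 2 // never blocks: the reply channel has room
	}
}

func main() {
	requests := make(chan request, 1000) // buffered request queue
	go worker(requests)

	req := request{arg: 21, reply: make(chan int, 1)}
	requests <- req          // enqueue
	fmt.Println(<-req.reply) // wait for this request's own result
}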

Of course if you over-run all of your memory, you are in trouble 
again. As usual, back-pressure is a really critical component
of most designs. You usually want it.

Back-pressure is definitely helpful in cases like this.

robert engels

Sep 2, 2025, 2:28:01 PM
to David Finkel, Jason E. Aten, golang-nuts
Yes, but without external synchronization you have no ordering on the senders - so which one actually blocks waiting for the receiver is random; further writers are blocked and added to the list, but there is no ordering within them.

As the OP described, the writer needs to wait for a response, so the “goroutine doesn't have to wait for another goroutine to schedule in order to pick up the next work in the queue” point doesn’t apply.

So having unbuffered on both the request and response channels when you only have a single worker simplifies things - but if the requestor doesn’t read the response (or none is provided) you will have a deadlock.

Jason E. Aten

Sep 2, 2025, 2:35:55 PM
to golang-nuts
On Tuesday, September 2, 2025 at 7:09:02 PM UTC+1 David Finkel wrote:
I don't think channel receive order is random when the senders are blocked.
Sending goroutines are queued in a linked list in FIFO order within the runtime's channel struct (hchan)

Interesting--thanks for pointing this out, David.

I think this is an implementation detail rather than a promise made by the language spec, no? 
i.e. something that is subject to change since the specification does not guarantee it.

In other words: beginners, you should not depend on it. Arguably, since it's not in the spec, it
ought to be randomized to prevent anyone from assuming it will always hold... since it could change in the future.

For replies I use the ticket + close-a-done-channel pattern, rather than a single reply channel. Thus the
worker never has to block itself once it is done with the work, and multiple parties can
observe that the job has finished if they have a pointer to it. See for example https://go.dev/play/p/ggviji5FfYz
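The rough shape of the pattern (a sketch; see the playground link for my actual version):

package main

import "fmt"

// ticket is handed to the worker; the worker closes Done when the result is
// ready, so any number of goroutines holding the *ticket can observe
// completion, and the worker never blocks on delivering it.
type ticket struct {
	arg    int
	result int
	Done   chan struct{}
}

func worker(jobs <-chan *ticket) {
	for t := range jobs {
		t.result = t.arg * 2
		close(t.Done) // broadcast completion to all observers
	}
}

func main() {
	jobs := make(chan *ticket)
	go worker(jobs)

	t := &ticket{arg: 21, Done: make(chan struct{})}
	jobs <- t
	<-t.Done              // the close happens-before this receive...
	fmt.Println(t.result) // ...so reading t.result is race-free
}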

David Finkel

Sep 2, 2025, 2:37:26 PM
to robert engels, Jason E. Aten, golang-nuts
On Tue, Sep 2, 2025 at 2:27 PM robert engels <ren...@ix.netcom.com> wrote:
Yes, but without external synchronization you have no ordering on the senders - so which one actually blocks waiting for the receiver is random; further writers are blocked and added to the list, but there is no ordering within them.
If they're all initiating a send at exactly the same time, yes, there's no ordering. However, if there's any temporal separation, the order in which the sends complete will be FIFO (it's a linked-list-based queue).

As the OP described, the writer needs to wait for a response, so the “goroutine doesn't have to wait for another goroutine to schedule in order to pick up the next work in the queue” point doesn’t apply.
I disagree; that response has to be sent somehow.
The typical way is to include a channel of capacity 1 in the "message" that's going to the worker.

So having unbuffered on both the request and response channels when you only have a single worker simplifies things - but if the requestor doesn’t read the response (or none is provided) you will have a deadlock.
Right, that's another reason why it's a bad idea to have the response channel unbuffered (beyond unbuffered writes introducing goroutine-scheduling dependencies).

Jason E. Aten

Sep 2, 2025, 2:40:11 PM
to golang-nuts
David wrote:
> The typical way is to include a channel of capacity 1 in the "message" that's going to the worker. 

I like this idea. But you have to guarantee that only one observer will ever get to check, once,
whether the job is done.

David Finkel

Sep 2, 2025, 2:49:21 PM
to Jason E. Aten, golang-nuts
On Tue, Sep 2, 2025 at 2:36 PM Jason E. Aten <j.e....@gmail.com> wrote:
On Tuesday, September 2, 2025 at 7:09:02 PM UTC+1 David Finkel wrote:
I don't think channel receive order is random when the senders are blocked.
Sending goroutines are queued in a linked list in FIFO order within the runtime's channel struct (hchan)

Interesting--thanks for pointing this out, David.

I think this is an implementation detail rather than a promise made by the language spec, no? 
i.e. something that is subject to change since the specification does not guarantee it.
Good point.
I just rechecked the language spec to be sure, and yeah, that's not guaranteed.

In other words: beginners, you should not depend on it. Arguably, since it's not in the spec, it
ought to be randomized to prevent anyone from assuming it will always hold... since it could change in the future.
Good advice. 

For replies I use the ticket + close-a-done-channel pattern, rather than a single reply channel. Thus the
worker never has to block itself once it is done with the work, and multiple parties can
observe that the job has finished if they have a pointer to it. See for example https://go.dev/play/p/ggviji5FfYz
That's definitely a useful strategy in cases where you might have multiple observers. 


Jason E. Aten

Sep 2, 2025, 2:49:30 PM
to golang-nuts
oops, forgot to close the done channel! ... the worker should do that: https://go.dev/play/p/kPAchZK0kKj

David Finkel

Sep 2, 2025, 2:51:37 PM
to Jason E. Aten, golang-nuts
On Tue, Sep 2, 2025 at 2:40 PM Jason E. Aten <j.e....@gmail.com> wrote:
David wrote:
> The typical way is to include a channel of capacity 1 in the "message" that's going to the worker. 

I like this idea. But you have to guarantee that only one observer will ever get to check, once,
whether the job is done.
Yep, usually the channel is only included in the message sent to the worker as a send-only channel, and the "whole" (bidirectional) channel is left as a local variable where it was created.
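In code, that convention looks something like this (illustrative):

package main

import "fmt"

// The worker only ever sees the send side of the reply channel; the
// bidirectional ("whole") channel stays local to submit.
type job struct {
	arg   int
	reply chan<- int
}

func submit(jobs chan<- job, arg int) int {
	reply := make(chan int, 1) // created here; only the send side travels
	jobs <- job{arg: arg, reply: reply}
	return <-reply
}

func main() {
	jobs := make(chan job)
	go func() { // the single worker
		for j := range jobs {
			j.reply <- j.arg * 2
		}
	}()
	fmt.Println(submit(jobs, 21))
}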