[crypto] ssh/agent: support parallel signing using request pipelining


Nicola Murino (Gerrit)

Apr 19, 2026, 2:35:53 PM (10 days ago)
to goph...@pubsubhelper.golang.org, golang-co...@googlegroups.com

Nicola Murino has uploaded the change for review

Commit message

ssh/agent: support parallel signing using request pipelining

Add ClientConfig and NewClientWithConfig to allow callers to opt into
pipelined request handling. When MaxInFlight >= 2 the client writes
requests to the connection as soon as the write path is available and
dispatches responses back to callers in FIFO order via a dedicated
reader goroutine, instead of fully serializing each call.

This lets an agent that load-balances signing across multiple backend
devices keep several of them busy concurrently without forcing callers
to maintain their own connection pool. Aggregate throughput scales
roughly linearly with MaxInFlight up to the number of backend devices.
The protocol's in-order response requirement still means a slow request
delays subsequent ones on the same connection; that behavior is unchanged.

NewClient is untouched and keeps its historic fully-serialized
behavior. NewClientWithConfig takes an io.ReadWriteCloser rather than
an io.ReadWriter so the pipelined client can close the connection to
unblock the reader goroutine on unrecoverable write errors.

Fixes golang/go#78473
Change-Id: Icf14e5e8ca897506d68fb32c14fc72c774cd97b2

Change diff

diff --git a/ssh/agent/client.go b/ssh/agent/client.go
index b357e18..ccb312d 100644
--- a/ssh/agent/client.go
+++ b/ssh/agent/client.go
@@ -311,8 +311,12 @@
type client struct {
// conn is typically a *net.UnixConn
conn io.ReadWriter
- // mu is used to prevent concurrent access to the agent
+ // mu is used to prevent concurrent access to the agent when
+ // pipeline is nil (the default, non-pipelined mode).
mu sync.Mutex
+ // pipeline, when non-nil, implements pipelined request handling
+ // and is used by callRaw in place of mu.
+ pipeline *pipeline
}

// NewClient returns an Agent that talks to an ssh-agent process over
@@ -321,6 +325,51 @@
return &client{conn: rw}
}

+// ClientConfig contains optional settings for an agent client.
+type ClientConfig struct {
+ // MaxInFlight sets the maximum number of concurrent outstanding
+ // requests that may be pipelined to the agent.
+ //
+ // If MaxInFlight is less than 2, the client uses the historic
+ // fully-serialized behavior: a request is sent only after the
+ // previous response has been received.
+ //
+ // If MaxInFlight is 2 or greater, concurrent callers may issue
+ // requests that are written to the connection as soon as the write
+ // path is available, without waiting for prior responses. This can
+ // significantly improve throughput when the agent is able to
+ // process multiple operations in parallel (for example, when it is
+ // backed by multiple signing devices).
+ //
+ // The ssh-agent protocol requires the agent to return responses in
+ // request order, so a slow request still delays subsequent
+ // responses (head-of-line blocking). Pipelining improves aggregate
+ // throughput but does not eliminate this constraint.
+ //
+ // Pipelining does not currently support per-request cancellation:
+ // a hung operation on the agent side can block subsequent requests
+ // until the underlying connection is closed. If this is a concern,
+ // enforce a timeout by closing the connection from the caller.
+ MaxInFlight int
+}
+
+// NewClientWithConfig returns an ExtendedAgent that talks to an
+// ssh-agent process over the given connection, using cfg.
+//
+// The connection must be an io.ReadWriteCloser: when pipelining is
+// enabled the client may need to close it to unblock the background
+// reader goroutine on write failures. Standard network and socket
+// types (for example *net.UnixConn or *net.TCPConn) satisfy this
+// interface. Callers that only have an io.ReadWriter, or that do not
+// need pipelining, should use NewClient instead.
+func NewClientWithConfig(rwc io.ReadWriteCloser, cfg ClientConfig) ExtendedAgent {
+ c := &client{conn: rwc}
+ if cfg.MaxInFlight >= 2 {
+ c.pipeline = newPipeline(rwc, cfg.MaxInFlight)
+ }
+ return c
+}
+
// call sends an RPC to the agent. On success, the reply is
// unmarshaled into reply and replyType is set to the first byte of
// the reply, which contains the type of the message.
@@ -340,6 +389,10 @@
// bytes of the response are returned; no unmarshalling is
// performed on the response.
func (c *client) callRaw(req []byte) (reply []byte, err error) {
+ if c.pipeline != nil {
+ return c.pipeline.call(req)
+ }
+
c.mu.Lock()
defer c.mu.Unlock()

@@ -854,3 +907,156 @@

return buf, nil
}
+
+// pipelineResult carries either a raw agent reply or an error back to a
+// caller waiting on the response channel.
+type pipelineResult struct {
+ reply []byte
+ err error
+}
+
+// pipeline implements request pipelining over a single agent connection.
+//
+// Writers serialize on writeMu to both register a reply channel in the
+// pending FIFO queue and write the request bytes on the wire; the two
+// must be atomic so the queue order matches the wire order. A single
+// reader goroutine decodes responses from the connection and dispatches
+// each one to the channel at the head of the queue.
+//
+// When the reader goroutine exits (on read error or protocol violation),
+// it closes exitCh to wake any writer blocked on the pending queue, then
+// serializes with any in-flight writer to close the pending channel, and
+// finally drains the remaining entries delivering the terminal error to
+// each waiting caller.
+type pipeline struct {
+ conn io.ReadWriteCloser
+
+ writeMu sync.Mutex
+ pending chan chan pipelineResult
+ exitCh chan struct{}
+
+ // errMu protects err. It is set at most once, before exitCh is
+ // closed, by the reader goroutine.
+ errMu sync.Mutex
+ err error
+}
+
+func newPipeline(conn io.ReadWriteCloser, maxInFlight int) *pipeline {
+ p := &pipeline{
+ conn: conn,
+ pending: make(chan chan pipelineResult, maxInFlight),
+ exitCh: make(chan struct{}),
+ }
+ go p.readLoop()
+ return p
+}
+
+// readLoop decodes responses from conn and dispatches them in FIFO order
+// to reply channels in pending. On any failure it invokes shutdown.
+func (p *pipeline) readLoop() {
+ var finalErr error
+ for {
+ var sizeBuf [4]byte
+ if _, err := io.ReadFull(p.conn, sizeBuf[:]); err != nil {
+ finalErr = err
+ break
+ }
+ respSize := binary.BigEndian.Uint32(sizeBuf[:])
+ if respSize > maxAgentResponseBytes {
+ finalErr = errors.New("response too large")
+ break
+ }
+ buf := make([]byte, respSize)
+ if _, err := io.ReadFull(p.conn, buf); err != nil {
+ finalErr = err
+ break
+ }
+ // Successful writes always enqueue before sending bytes, so
+ // pending has a waiting channel for this response.
+ ch := <-p.pending
+ // The reply channel is buffered with capacity 1 and is only
+ // ever written to once, so this send cannot block.
+ ch <- pipelineResult{reply: buf}
+ }
+ p.shutdown(clientErr(finalErr))
+}
+
+// shutdown is called exactly once, from readLoop, when the reader is
+// terminating. It unblocks pending writers and fails all in-flight
+// requests with finalErr.
+func (p *pipeline) shutdown(finalErr error) {
+ p.errMu.Lock()
+ p.err = finalErr
+ p.errMu.Unlock()
+
+ // Wake any writer blocked waiting for a slot in the pending queue.
+ close(p.exitCh)
+
+ // Wait for any writer currently inside its critical section to
+ // complete. After this lock, no new writer can reach the send on
+ // pending: they will observe exitCh closed in the select and bail
+ // out before attempting the send.
+ p.writeMu.Lock()
+ close(p.pending)
+ p.writeMu.Unlock()
+
+ // Drain entries that were enqueued but never answered, delivering
+ // the terminal error to their waiting callers. The reply channels
+ // are buffered (cap 1) and written to exactly once, so these sends
+ // cannot block.
+ for ch := range p.pending {
+ ch <- pipelineResult{err: finalErr}
+ }
+}
+
+// call sends req to the agent and returns the matching raw response.
+func (p *pipeline) call(req []byte) ([]byte, error) {
+ replyCh := make(chan pipelineResult, 1)
+
+ p.writeMu.Lock()
+
+ // Priority check: if the reader has already finished shutdown,
+ // pending is closed and sending to it would panic. Bail out now.
+ // Once we pass this check while holding writeMu, shutdown cannot
+ // complete close(pending) until we release writeMu, so the send
+ // below is safe against concurrent closure.
+ select {
+ case <-p.exitCh:
+ p.writeMu.Unlock()
+ p.errMu.Lock()
+ err := p.err
+ p.errMu.Unlock()
+ return nil, err
+ default:
+ }
+
+ // Enqueue the reply channel before writing the request, so FIFO
+ // order on the wire matches FIFO order in the pending queue. The
+ // exitCh arm handles the case where the reader errors while we
+ // block on a full queue.
+ select {
+ case p.pending <- replyCh:
+ case <-p.exitCh:
+ p.writeMu.Unlock()
+ p.errMu.Lock()
+ err := p.err
+ p.errMu.Unlock()
+ return nil, err
+ }
+
+ msg := make([]byte, 4+len(req))
+ binary.BigEndian.PutUint32(msg, uint32(len(req)))
+ copy(msg[4:], req)
+ _, werr := p.conn.Write(msg)
+ p.writeMu.Unlock()
+
+ if werr != nil {
+ // The connection is in an undefined state. Close it so the
+ // reader unblocks promptly and the drain path delivers our
+ // terminal error on replyCh.
+ p.conn.Close()
+ }
+
+ res := <-replyCh
+ return res.reply, res.err
+}
diff --git a/ssh/agent/client_pipeline_test.go b/ssh/agent/client_pipeline_test.go
new file mode 100644
index 0000000..76782ec
--- /dev/null
+++ b/ssh/agent/client_pipeline_test.go
@@ -0,0 +1,448 @@
+// Copyright 2026 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package agent
+
+import (
+ "crypto/rand"
+ "crypto/rsa"
+ "encoding/binary"
+ "errors"
+ "fmt"
+ "io"
+ "net"
+ "sync"
+ "sync/atomic"
+ "testing"
+ "time"
+
+ "golang.org/x/crypto/ssh"
+)
+
+func TestPipelineParallelSigns(t *testing.T) {
+ c1, c2, err := netPipe()
+ if err != nil {
+ t.Fatalf("netPipe: %v", err)
+ }
+ defer c1.Close()
+ defer c2.Close()
+
+ key, err := rsa.GenerateKey(rand.Reader, 2048)
+ if err != nil {
+ t.Fatal(err)
+ }
+ keyring := NewKeyring()
+ if err := keyring.Add(AddedKey{PrivateKey: key}); err != nil {
+ t.Fatal(err)
+ }
+ go ServeAgent(keyring, c2)
+
+ client := NewClientWithConfig(c1, ClientConfig{MaxInFlight: 16})
+
+ signer, err := ssh.NewSignerFromKey(key)
+ if err != nil {
+ t.Fatal(err)
+ }
+ pubKey := signer.PublicKey()
+
+ const N = 64
+ var wg sync.WaitGroup
+ errCh := make(chan error, N)
+ for i := range N {
+ wg.Go(func() {
+ data := []byte{byte(i), byte(i >> 8)}
+ sig, err := client.Sign(pubKey, data)
+ if err != nil {
+ errCh <- err
+ return
+ }
+ if err := pubKey.Verify(data, sig); err != nil {
+ errCh <- err
+ }
+ })
+ }
+ wg.Wait()
+ close(errCh)
+ for err := range errCh {
+ t.Error(err)
+ }
+}
+
+func TestPipelineBackpressure(t *testing.T) {
+ c1, c2, err := netPipe()
+ if err != nil {
+ t.Fatalf("netPipe: %v", err)
+ }
+
+ const maxInFlight = 3
+ const total = 7
+
+ // Fake agent: the reader drains requests from the wire independently
+ // of response production, so the write path is never self-blocked.
+ // The responder only writes a canned success reply once unblocked
+ // via gate.
+ var reqsRead int64
+ gate := make(chan struct{}, total)
+ pending := make(chan struct{}, total+maxInFlight)
+ readerDone := make(chan struct{})
+ responderDone := make(chan struct{})
+ go func() {
+ defer close(readerDone)
+ defer close(pending)
+ for {
+ var sizeBuf [4]byte
+ if _, err := io.ReadFull(c2, sizeBuf[:]); err != nil {
+ return
+ }
+ size := binary.BigEndian.Uint32(sizeBuf[:])
+ req := make([]byte, size)
+ if _, err := io.ReadFull(c2, req); err != nil {
+ return
+ }
+ atomic.AddInt64(&reqsRead, 1)
+ pending <- struct{}{}
+ }
+ }()
+ go func() {
+ defer close(responderDone)
+ for range pending {
+ <-gate
+ var out [5]byte
+ binary.BigEndian.PutUint32(out[:4], 1)
+ out[4] = agentSuccess
+ if _, err := c2.Write(out[:]); err != nil {
+ return
+ }
+ }
+ }()
+ defer func() {
+ c1.Close()
+ c2.Close()
+ <-readerDone
+ <-responderDone
+ }()
+
+ client := NewClientWithConfig(c1, ClientConfig{MaxInFlight: maxInFlight})
+
+ var wg sync.WaitGroup
+ errs := make(chan error, total)
+ for range total {
+ wg.Go(func() {
+ // RemoveAll maps to a simpleCall expecting agentSuccess.
+ if err := client.RemoveAll(); err != nil {
+ errs <- err
+ }
+ })
+ }
+
+ // Wait for the client to fill the pipeline.
+ deadline := time.Now().Add(2 * time.Second)
+ for time.Now().Before(deadline) {
+ if atomic.LoadInt64(&reqsRead) >= maxInFlight {
+ break
+ }
+ time.Sleep(5 * time.Millisecond)
+ }
+ if got := atomic.LoadInt64(&reqsRead); got != maxInFlight {
+ t.Errorf("requests in flight before any response: got %d, want %d", got, maxInFlight)
+ }
+
+ // Release all responses; every caller should eventually complete.
+ for range total {
+ gate <- struct{}{}
+ }
+ wg.Wait()
+ close(errs)
+ for err := range errs {
+ t.Error(err)
+ }
+
+ if got := atomic.LoadInt64(&reqsRead); got != total {
+ t.Errorf("total requests received: got %d, want %d", got, total)
+ }
+}
+
+func TestPipelineConnCloseFailsInFlight(t *testing.T) {
+ c1, c2, err := netPipe()
+ if err != nil {
+ t.Fatalf("netPipe: %v", err)
+ }
+
+ // Server that reads requests but never replies, so all calls are
+ // parked waiting for responses.
+ serverExit := make(chan struct{})
+ go func() {
+ defer close(serverExit)
+ buf := make([]byte, 4096)
+ for {
+ if _, err := c2.Read(buf); err != nil {
+ return
+ }
+ }
+ }()
+
+ client := NewClientWithConfig(c1, ClientConfig{MaxInFlight: 4})
+
+ const N = 8
+ var wg sync.WaitGroup
+ errs := make(chan error, N)
+ for range N {
+ wg.Go(func() {
+ errs <- client.RemoveAll()
+ })
+ }
+
+ // Give callers time to enter the pipeline, then tear down the conn.
+ time.Sleep(50 * time.Millisecond)
+ c1.Close()
+ c2.Close()
+
+ // All callers must return promptly with an error.
+ doneCh := make(chan struct{})
+ go func() {
+ wg.Wait()
+ close(doneCh)
+ }()
+ select {
+ case <-doneCh:
+ case <-time.After(5 * time.Second):
+ t.Fatal("pipelined callers did not complete after conn close")
+ }
+
+ close(errs)
+ for err := range errs {
+ if err == nil {
+ t.Error("expected error after conn close, got nil")
+ }
+ }
+
+ // Subsequent calls also fail fast rather than hanging.
+ errCh := make(chan error, 1)
+ go func() { errCh <- client.RemoveAll() }()
+ select {
+ case err := <-errCh:
+ if err == nil {
+ t.Error("expected error on post-close call, got nil")
+ }
+ case <-time.After(2 * time.Second):
+ t.Fatal("post-close call hung")
+ }
+
+ <-serverExit
+}
+
+func TestPipelineDisabledWhenMaxInFlightLow(t *testing.T) {
+ c1, c2, err := netPipe()
+ if err != nil {
+ t.Fatalf("netPipe: %v", err)
+ }
+ defer c1.Close()
+ defer c2.Close()
+
+ for _, n := range []int{0, 1} {
+ got := NewClientWithConfig(c1, ClientConfig{MaxInFlight: n}).(*client)
+ if got.pipeline != nil {
+ t.Errorf("MaxInFlight=%d: pipeline enabled, expected nil", n)
+ }
+ }
+ got := NewClientWithConfig(c1, ClientConfig{MaxInFlight: 2}).(*client)
+ if got.pipeline == nil {
+ t.Error("MaxInFlight=2: pipeline not enabled")
+ }
+}
+
+// writeFailConn wraps a net.Conn and forces Write to return an error when
+// the fail flag is set. Close delegates to the underlying conn so the test
+// can observe whether the pipeline closed it.
+type writeFailConn struct {
+ net.Conn
+ fail atomic.Bool
+}
+
+func (c *writeFailConn) Write(p []byte) (int, error) {
+ if c.fail.Load() {
+ return 0, errors.New("synthetic write failure")
+ }
+ return c.Conn.Write(p)
+}
+
+func TestPipelineWriteErrorClosesConn(t *testing.T) {
+ c1, c2, err := netPipe()
+ if err != nil {
+ t.Fatalf("netPipe: %v", err)
+ }
+ defer c2.Close()
+
+ // Peer that reads and discards, exiting when the conn closes.
+ serverExit := make(chan struct{})
+ go func() {
+ defer close(serverExit)
+ io.Copy(io.Discard, c2)
+ }()
+
+ wrapper := &writeFailConn{Conn: c1}
+ wrapper.fail.Store(true)
+
+ client := NewClientWithConfig(wrapper, ClientConfig{MaxInFlight: 4})
+
+ errCh := make(chan error, 1)
+ go func() { errCh <- client.RemoveAll() }()
+
+ select {
+ case err := <-errCh:
+ if err == nil {
+ t.Error("expected error on forced write failure, got nil")
+ }
+ case <-time.After(2 * time.Second):
+ t.Fatal("RemoveAll hung after write failure")
+ }
+
+ // The pipeline must have closed the underlying conn, so the peer
+ // goroutine returns from its Read.
+ select {
+ case <-serverExit:
+ case <-time.After(2 * time.Second):
+ t.Fatal("connection was not closed after write failure")
+ }
+
+ // After shutdown, subsequent calls must also fail promptly.
+ errCh2 := make(chan error, 1)
+ go func() { errCh2 <- client.RemoveAll() }()
+ select {
+ case err := <-errCh2:
+ if err == nil {
+ t.Error("expected error on post-shutdown call, got nil")
+ }
+ case <-time.After(2 * time.Second):
+ t.Fatal("post-shutdown call hung")
+ }
+}
+
+// multiDeviceAgent simulates an ssh-agent backed by numDevices signing
+// devices, each of which takes latency to process a request. Incoming
+// requests are round-robin dispatched to devices, and responses are
+// serialized back to the wire in request order as the protocol requires.
+//
+// The simulation does not emit a semantically meaningful reply; it just
+// writes a 1-byte SSH_AGENT_SUCCESS after the simulated latency has
+// elapsed. This is enough to exercise callRaw/simpleCall (RemoveAll).
+func multiDeviceAgent(t testing.TB, conn io.ReadWriteCloser, numDevices int, latency time.Duration) (done <-chan struct{}) {
+ t.Helper()
+
+ type job struct {
+ ready chan struct{}
+ }
+
+ devices := make([]chan job, numDevices)
+ var devicesWG sync.WaitGroup
+ for i := range numDevices {
+ devices[i] = make(chan job, 1024)
+ ch := devices[i]
+ devicesWG.Go(func() {
+ for j := range ch {
+ time.Sleep(latency)
+ close(j.ready)
+ }
+ })
+ }
+
+ // Order-preserving responder: responses are written in the same
+ // order the requests were read, regardless of which device finished
+ // first.
+ orderCh := make(chan chan struct{}, 1024)
+ responderDone := make(chan struct{})
+ go func() {
+ defer close(responderDone)
+ for ready := range orderCh {
+ <-ready
+ var out [5]byte
+ binary.BigEndian.PutUint32(out[:4], 1)
+ out[4] = agentSuccess
+ if _, err := conn.Write(out[:]); err != nil {
+ return
+ }
+ }
+ }()
+
+ finished := make(chan struct{})
+ go func() {
+ defer close(finished)
+ defer func() {
+ close(orderCh)
+ <-responderDone
+ for _, d := range devices {
+ close(d)
+ }
+ devicesWG.Wait()
+ }()
+ next := 0
+ for {
+ var sizeBuf [4]byte
+ if _, err := io.ReadFull(conn, sizeBuf[:]); err != nil {
+ return
+ }
+ size := binary.BigEndian.Uint32(sizeBuf[:])
+ if _, err := io.CopyN(io.Discard, conn, int64(size)); err != nil {
+ return
+ }
+ ready := make(chan struct{})
+ devices[next%numDevices] <- job{ready: ready}
+ next++
+ orderCh <- ready
+ }
+ }()
+
+ return finished
+}
+
+// BenchmarkPipelineMultiDevice compares throughput of the agent client at
+// different MaxInFlight settings against a simulated multi-device agent.
+//
+// Setup: 4 simulated signing devices, each taking 2ms per request. The
+// protocol serializes responses in request order, so with MaxInFlight=1
+// the client must wait for each response before sending the next and the
+// devices never run in parallel. Larger MaxInFlight lets the client keep
+// multiple devices busy concurrently; throughput should plateau once
+// MaxInFlight >= numDevices (minus some head-of-line overhead).
+//
+// Run with: go test -benchmem -bench BenchmarkPipelineMultiDevice -benchtime=2s ./ssh/agent/
+func BenchmarkPipelineMultiDevice(b *testing.B) {
+ const (
+ numDevices = 4
+ deviceLatency = 2 * time.Millisecond
+ parallelism = 32
+ )
+
+ for _, maxInFlight := range []int{1, 2, 4, 8, 16} {
+ b.Run(fmt.Sprintf("MaxInFlight=%d", maxInFlight), func(b *testing.B) {
+ c1, c2, err := netPipe()
+ if err != nil {
+ b.Fatalf("netPipe: %v", err)
+ }
+ serverDone := multiDeviceAgent(b, c2, numDevices, deviceLatency)
+
+ var client ExtendedAgent
+ if maxInFlight <= 1 {
+ client = NewClient(c1)
+ } else {
+ client = NewClientWithConfig(c1, ClientConfig{MaxInFlight: maxInFlight})
+ }
+
+ b.ResetTimer()
+ b.SetParallelism(parallelism)
+ b.RunParallel(func(pb *testing.PB) {
+ for pb.Next() {
+ if err := client.RemoveAll(); err != nil {
+ b.Fatal(err)
+ }
+ }
+ })
+ b.StopTimer()
+
+ c1.Close()
+ c2.Close()
+ <-serverDone
+ })
+ }
+}

Change information

Files:
  • M ssh/agent/client.go
  • A ssh/agent/client_pipeline_test.go
Change size: L
Delta: 2 files changed, 655 insertions(+), 1 deletion(-)
Open in Gerrit

Related details

Attention set is empty
Submit Requirements:
  • Code-Review: requirement is not satisfied
  • No-Unresolved-Comments: requirement is satisfied
  • Review-Enforcement: requirement is not satisfied
  • TryBots-Pass: requirement is not satisfied
Gerrit-MessageType: newchange
Gerrit-Project: crypto
Gerrit-Branch: master
Gerrit-Change-Id: Icf14e5e8ca897506d68fb32c14fc72c774cd97b2
Gerrit-Change-Number: 768483
Gerrit-PatchSet: 1
Gerrit-Owner: Nicola Murino <nicola...@gmail.com>

Nicola Murino (Gerrit)

Apr 19, 2026, 2:36:27 PM (10 days ago)
to goph...@pubsubhelper.golang.org, golang-co...@googlegroups.com

Nicola Murino voted Commit-Queue+1

Gerrit-MessageType: comment
Gerrit-Project: crypto
Gerrit-Branch: master
Gerrit-Change-Id: Icf14e5e8ca897506d68fb32c14fc72c774cd97b2
Gerrit-Change-Number: 768483
Gerrit-PatchSet: 1
Gerrit-Owner: Nicola Murino <nicola...@gmail.com>
Gerrit-Reviewer: Nicola Murino <nicola...@gmail.com>
Gerrit-Comment-Date: Sun, 19 Apr 2026 18:36:21 +0000
Gerrit-HasComments: No
Gerrit-Has-Labels: Yes

Nicola Murino (Gerrit)

5:03 AM (3 hours ago)
to goph...@pubsubhelper.golang.org, golang-co...@googlegroups.com

Nicola Murino uploaded new patchset

Nicola Murino uploaded patch set #2 to this change.
Following approvals got outdated and were removed:
Gerrit-MessageType: newpatchset
Gerrit-Project: crypto
Gerrit-Branch: master
Gerrit-Change-Id: Icf14e5e8ca897506d68fb32c14fc72c774cd97b2
Gerrit-Change-Number: 768483
Gerrit-PatchSet: 2
Gerrit-Owner: Nicola Murino <nicola...@gmail.com>
Gerrit-Reviewer: Nicola Murino <nicola...@gmail.com>
Gerrit-CC: Filippo Valsorda <fil...@golang.org>
Gerrit-CC: Gopher Robot <go...@golang.org>
Gerrit-CC: Roland Shoemaker <rol...@golang.org>

Filippo Valsorda (Gerrit)

6:22 AM (2 hours ago)
to Nicola Murino, goph...@pubsubhelper.golang.org, Roland Shoemaker, Filippo Valsorda, Gopher Robot, golang...@luci-project-accounts.iam.gserviceaccount.com, golang-co...@googlegroups.com
Attention needed from Nicola Murino

Filippo Valsorda added 5 comments

Commit Message
File-level comment, Patchset 2 (Parent):
Filippo Valsorda . unresolved

Update the commit message.

File ssh/agent/client.go
Line 325, Patchset 2 (Latest): // the wire is available, rather than fully serialized. The ssh-agent
Filippo Valsorda . unresolved

```suggestion
// the wire is available, rather than waiting for the previous responses. The ssh-agent
```

Line 329, Patchset 2 (Latest): func NewClient(rw io.ReadWriter) ExtendedAgent {
Filippo Valsorda . unresolved

Document that rw needs to implement Closer to unblock the Reader goroutine in case of Writer errors.

Line 868, Patchset 2 (Latest): pending chan chan pipelineResult
Filippo Valsorda . unresolved

Document that the outer chan is a FIFO of size pipelineMaxInFlight, and the inner one is buffered of size one.

Line 871, Patchset 2 (Latest):
 // errMu protects err. It is set at most once, before exitCh is
 // closed, by the reader goroutine.
 errMu sync.Mutex
 err error
Filippo Valsorda . unresolved

atomic.Pointer[error] since the lock is always held just to read/write the error.


Related details

Attention is currently required from:
  • Nicola Murino
Submit Requirements:
  • Code-Review: requirement is not satisfied
  • No-Unresolved-Comments: requirement is not satisfied
  • Review-Enforcement: requirement is not satisfied
  • TryBots-Pass: requirement is not satisfied
Gerrit-MessageType: comment
Gerrit-Project: crypto
Gerrit-Branch: master
Gerrit-Change-Id: Icf14e5e8ca897506d68fb32c14fc72c774cd97b2
Gerrit-Change-Number: 768483
Gerrit-PatchSet: 2
Gerrit-Owner: Nicola Murino <nicola...@gmail.com>
Gerrit-Reviewer: Nicola Murino <nicola...@gmail.com>
Gerrit-CC: Filippo Valsorda <fil...@golang.org>
Gerrit-CC: Gopher Robot <go...@golang.org>
Gerrit-CC: Roland Shoemaker <rol...@golang.org>
Gerrit-Attention: Nicola Murino <nicola...@gmail.com>
Gerrit-Comment-Date: Wed, 29 Apr 2026 10:22:26 +0000
Gerrit-HasComments: Yes
Gerrit-Has-Labels: No