Brad Fitzpatrick posted comments on http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn.
Patch Set 6: Sorry, I just found this back again and realized I never replied. I'll try to take a look at this in the next few days. If there are changes required (and there usually are), are you free to make them, or would you prefer I take this over, retaining your authorship?
Mike Appleby posted comments on http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn.
Patch Set 6:
> Sorry, I just found this back again and realized I never replied.
> I'll try to take a look at this in the next few days.
>
> If there are changes required (and there usually are), are you free
> to make them, or would you prefer I take this over, retaining your
> authorship?
I'm busy today, but otherwise I'm available and happy to make the changes. Of course, if you want to take ownership, that's fine too.
Brad Fitzpatrick posted comments on http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn.
Patch Set 6: (4 comments)
Patch Set #6, Line 1026: func (cc *ClientConn) PeerMaxHeaderListSize() uint32 {
Unexport this. Especially because that would be a data race. Also document that the caller must hold cc.mu.
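For illustration, a minimal sketch of what this comment asks for; the field name below is a placeholder, not the one in the patch:
// peerMaxHeaderListSize reports the SETTINGS_MAX_HEADER_LIST_SIZE advertised
// by the peer, if any. Caller must hold cc.mu.
func (cc *ClientConn) peerMaxHeaderListSize() uint32 {
	return cc.maxHeaderListSizeFromPeer // placeholder field name
}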
Patch Set #6, Line 1044: func headerFieldSize(name, value string) uint64 {
Let's use the existing hpack.HeaderField.Size method instead:
// Size returns the size of an entry per RFC 7541 section 4.1.
func (hf HeaderField) Size() uint32 {
	return uint32(len(hf.Name) + len(hf.Value) + 32)
}
It's true that it doesn't handle multi-gigabyte strings. I'm not sure we care. Or maybe we change that method to be uint64.
Patch Set #6, Line 1048: func headerListSizeOK(hlSize uint64, peerSize uint32) bool {
I don't think you need this function. I'd just write the <= (and not the == part) at the call site. I think it'd be more clear anyway.
Patch Set #6, Line 1092: addHeader := func(h *[][2]string, name, value string) {
Have you benchmarked this before and after? I'm not willing to add allocations to this path for the header list size support. Plus I think we can implement this header list size without any new allocations.
Mike Appleby uploaded patch set #7 to http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn.
http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn
Add a new peerMaxHeaderListSize() method to ClientConn which reports the
SETTINGS_MAX_HEADER_LIST_SIZE requested by the client's peer, and
respect this limit in (*ClientConn) encodeHeaders / encodeTrailers.
Attempting to send more than peerMaxHeaderListSize bytes of headers or
trailers will result in RoundTrip returning errRequestHeaderListSize.
Updates golang/go#13959
Change-Id: Ic707179782acdf8ae543429ea1af7f4f30e67e59
---
M http2/http2.go
M http2/transport.go
M http2/transport_test.go
3 files changed, 389 insertions(+), 37 deletions(-)
Mike Appleby posted comments on http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn.
Patch Set 6: (4 comments)
Patch Set #6, Line 1026: func (cc *ClientConn) PeerMaxHeaderListSize() uint32 {
Unexport this. Especially because that would be a data race.
Done. Not sure what the convention is for avoiding name clash with struct field of the same name. I went with appending underscore to the struct field name. Let me know if you prefer something else. Another option would be to do away with the method and just let the zero value represent "unlimited" as it does on the wire. However, you would lose the ability to validate the header list size with a simple < comparison, as requested in your other comment.
Patch Set #6, Line 1044: func headerFieldSize(name, value string) uint64 {
Let's use the existing hpack.HeaderField.Size method instead:
Done
Patch Set #6, Line 1048: func headerListSizeOK(hlSize uint64, peerSize uint32) bool {
I don't think you need this function. I'd just write the <= (and not the ==
Done
Patch Set #6, Line 1092: addHeader := func(h *[][2]string, name, value string) {
Have you benchmarked this before and after? I'm not willing to add allocati
Yes. Benchcmp results are posted here:
https://gist.github.com/appleby/cf50bc81fa87f0bbbf1b483628200cd0
Most of the additional allocations actually come from the strings.ToLower() call below, when normalizing header keys to check if the caller provided a "user-agent". See also my comment from Sept. 18th in this thread, which also links to benchcmp results of avoiding that string normalization. I didn't know about benchstat when I originally posted those benchmarks, so the cpu numbers are iffy, but the allocations appear to be quite stable across runs. Let me know if you want to rerun the benchmark with benchstat / pprof, or if you prefer some other benchmark.
Brad Fitzpatrick posted comments on this change.
Mike Appleby uploaded patch set #8 to this change.
http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn
Add a new peerMaxHeaderListSize() method to ClientConn which reports the
SETTINGS_MAX_HEADER_LIST_SIZE requested by the client's peer, and
respect this limit in (*ClientConn) encodeHeaders / encodeTrailers.
Attempting to send more than peerMaxHeaderListSize bytes of headers or
trailers will result in RoundTrip returning errRequestHeaderListSize.
Updates golang/go#13959
Change-Id: Ic707179782acdf8ae543429ea1af7f4f30e67e59
---
M http2/http2.go
M http2/transport.go
M http2/transport_test.go
3 files changed, 488 insertions(+), 36 deletions(-)
Mike Appleby posted comments on this change.
Patch Set 8:
Patch set 8 removes the call to strings.ToLower to normalize header keys when counting bytes. That string normalization was causing a number of allocations linear in the size of req.Header.
Removing the normalization comes at the cost of getting the byte count wrong in cases where the caller provides a user-agent key in non-canonical form, in which case we will count the bytes for both the caller's user-agent and the default user-agent, effectively reducing the max header list size by 60 bytes. Even in such cases, only the caller's user-agent is actually encoded.
Benchcmp results comparing patch set 8 to tip can be found here:
https://gist.github.com/appleby/cf50bc81fa87f0bbbf1b483628200cd0
The remaining sources of new allocations in encodeHeaders are those related to the preHeaders/postHeaders slices. For the bare-bones GET requests in the benchmark I'm running, these slices result in an additional 7 allocations (~400 bytes) per request.
I don't have any great ideas for getting rid of these remaining allocations. I'm open to suggestions.
One option would be to replace the preHeaders / postHeaders slices with a single small slice with large-enough capacity. In other words, replace the current
addHeader := func(h *[][2]string, name, value string) {
*h = append(*h, [2]string{name, value})
...
}
var preHeaders [][2]string
...
var postHeaders [][2]string
with
defaultHeaders := make([][2]string, 0, 10)
addHeader := func(name, value string) {
defaultHeaders = append(defaultHeaders, [2]string{name, value})
...
}
With this change, the compiler is able to infer that defaultHeaders doesn't escape to the heap, and since the capacity is small, it's apparently stack allocated, at least on my linux/amd64 system. This seems like an implementation detail though, and not guaranteed to produce allocation-free code. It also uglies up the implementation slightly, since you now have to keep an index into defaultHeaders to keep track of where the "pre" headers end and the "post" headers start.
Mike Appleby uploaded patch set #9 to this change.
http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn
Add a new peerMaxHeaderListSize() method to ClientConn which reports the
SETTINGS_MAX_HEADER_LIST_SIZE requested by the client's peer, and
respect this limit in (*ClientConn) encodeHeaders / encodeTrailers.
Attempting to send more than peerMaxHeaderListSize bytes of headers or
trailers will result in RoundTrip returning errRequestHeaderListSize.
Updates golang/go#13959
Change-Id: Ic707179782acdf8ae543429ea1af7f4f30e67e59
---
M http2/http2.go
M http2/transport.go
M http2/transport_test.go
3 files changed, 488 insertions(+), 36 deletions(-)
Mike Appleby posted comments on this change.
Patch set 9:
Patch set 9 is a rebase onto http2 tip.
Benchsave results are here:
https://perf.golang.org/search?q=upload:20170407.2
Benchmarks were run on my laptop under perflock with cpu governor set to "performance", but a handful of tests still show high (> 10%) variance.
Copy/paste of benchsave results, hopefully formatting works out:
name old time/op new time/op delta
ServerGets-4 177µs ± 1% 177µs ± 0% ~ (p=0.776 n=48+45)
ServerPosts-4 195µs ± 0% 195µs ± 0% ~ (p=0.105 n=49+48)
ServerToClientStreamDefaultOptions-4 35.5µs ±16% 36.4µs ± 9% +2.56% (p=0.022 n=42+45)
ServerToClientStreamReuseFrames-4 35.1µs ±21% 35.4µs ±18% ~ (p=0.828 n=47+44)
Server_GetRequest-4 173µs ± 0% 173µs ± 0% ~ (p=0.671 n=48+47)
Server_PostRequest-4 194µs ± 0% 194µs ± 0% ~ (p=0.348 n=48+49)
ClientRequestHeaders/___0_Headers-4 154µs ± 1% 156µs ± 1% +1.33% (p=0.000 n=50+47)
ClientRequestHeaders/__10_Headers-4 171µs ± 1% 173µs ± 1% +1.22% (p=0.000 n=45+46)
ClientRequestHeaders/_100_Headers-4 302µs ± 2% 309µs ± 3% +2.30% (p=0.000 n=50+50)
ClientRequestHeaders/1000_Headers-4 4.09ms ±12% 4.10ms ±12% ~ (p=0.915 n=50+50)
name old alloc/op new alloc/op delta
ServerGets-4 4.36kB ± 0% 4.36kB ± 0% ~ (p=0.156 n=50+50)
ServerPosts-4 5.31kB ± 0% 5.31kB ± 0% ~ (p=0.299 n=38+50)
ServerToClientStreamDefaultOptions-4 877B ± 0% 878B ± 0% +0.14% (p=0.013 n=50+43)
ServerToClientStreamReuseFrames-4 828B ± 0% 829B ± 0% +0.12% (p=0.016 n=41+41)
Server_GetRequest-4 4.39kB ± 0% 4.39kB ± 0% ~ (p=0.783 n=50+50)
Server_PostRequest-4 5.29kB ± 0% 5.29kB ± 0% -0.05% (p=0.020 n=45+50)
ClientRequestHeaders/___0_Headers-4 4.42kB ± 0% 4.81kB ± 0% +8.83% (p=0.000 n=50+50)
ClientRequestHeaders/__10_Headers-4 6.14kB ± 0% 6.53kB ± 0% +6.33% (p=0.000 n=50+49)
ClientRequestHeaders/_100_Headers-4 30.0kB ± 0% 30.4kB ± 0% +1.29% (p=0.000 n=50+50)
ClientRequestHeaders/1000_Headers-4 343kB ± 0% 343kB ± 0% +0.12% (p=0.000 n=50+50)
name old allocs/op new allocs/op delta
ServerGets-4 65.0 ± 0% 65.0 ± 0% ~ (all equal)
ServerPosts-4 78.0 ± 0% 78.0 ± 0% ~ (all equal)
ServerToClientStreamDefaultOptions-4 17.0 ± 0% 17.0 ± 0% ~ (all equal)
ServerToClientStreamReuseFrames-4 16.0 ± 0% 16.0 ± 0% ~ (all equal)
Server_GetRequest-4 61.0 ± 0% 61.0 ± 0% ~ (all equal)
Server_PostRequest-4 74.0 ± 0% 74.0 ± 0% ~ (all equal)
ClientRequestHeaders/___0_Headers-4 51.0 ± 0% 58.0 ± 0% +13.73% (p=0.000 n=50+50)
ClientRequestHeaders/__10_Headers-4 83.0 ± 0% 90.0 ± 0% +8.43% (p=0.000 n=50+50)
ClientRequestHeaders/_100_Headers-4 365 ± 0% 372 ± 0% +1.92% (p=0.000 n=50+50)
ClientRequestHeaders/1000_Headers-4 5.12k ± 0% 5.13k ± 0% +0.14% (p=0.000 n=50+50)
Brad Fitzpatrick posted comments on this change.
Patch set 9:
Mike,
Sorry, I dropped the ball on reviewing this again and now there are merge conflicts.
Care to rebase this again? Sorry. :/
Mike Appleby posted comments on this change.
Patch set 9:
No problem. I'll try to get the rebase done in the next couple of days so the review can move forward. As long as the conflicts are easy to resolve, it shouldn't be a problem. Otherwise, I might not get to it until next week.
Mike Appleby uploaded patch set #10 to this change.
http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn
Add a new peerMaxHeaderListSize() method to ClientConn which reports the
SETTINGS_MAX_HEADER_LIST_SIZE requested by the client's peer, and
respect this limit in (*ClientConn) encodeHeaders / encodeTrailers.
Attempting to send more than peerMaxHeaderListSize bytes of headers or
trailers will result in RoundTrip returning errRequestHeaderListSize.
Updates golang/go#13959
Change-Id: Ic707179782acdf8ae543429ea1af7f4f30e67e59
---
M http2/http2.go
M http2/transport.go
M http2/transport_test.go
3 files changed, 488 insertions(+), 36 deletions(-)
Mike Appleby posted comments on this change.
Patch set 10:
Benchsave results are here:
https://perf.golang.org/search?q=upload:20170616.2
Benchmarks were run on my laptop with cpu frequency scaling set to "performance", but cpu numbers are pretty variable. I can try shutting down some background services and running again, if desired.
Copy/pasting benchsave results inline:
name old time/op new time/op delta
ServerGets-4 172µs ± 1% 172µs ± 1% ~ (p=0.115 n=46+47)
ServerPosts-4 196µs ± 7% 190µs ± 3% -2.76% (p=0.000 n=50+47)
ServerToClientStreamDefaultOptions-4 31.4µs ±16% 29.9µs ±26% ~ (p=0.146 n=44+44)
ServerToClientStreamReuseFrames-4 40.0µs ±93% 42.7µs ±86% ~ (p=0.444 n=50+50)
Server_GetRequest-4 183µs ±23% 186µs ±32% +1.37% (p=0.000 n=48+50)
Server_PostRequest-4 236µs ±20% 228µs ±18% -3.42% (p=0.000 n=50+50)
ClientRequestHeaders/___0_Headers-4 183µs ±22% 188µs ±21% ~ (p=0.894 n=50+50)
ClientRequestHeaders/__10_Headers-4 184µs ±30% 183µs ±28% -0.85% (p=0.006 n=50+50)
ClientRequestHeaders/_100_Headers-4 407µs ± 2% 403µs ± 2% -0.87% (p=0.000 n=45+38)
ClientRequestHeaders/1000_Headers-4 4.15ms ±66% 6.48ms ±72% +56.30% (p=0.005 n=40+50)
name old alloc/op new alloc/op delta
ServerGets-4 4.49kB ± 0% 4.50kB ± 0% +0.10% (p=0.009 n=50+45)
ServerPosts-4 5.44kB ± 0% 5.44kB ± 0% ~ (p=0.793 n=49+50)
ServerToClientStreamDefaultOptions-4 876B ± 1% 876B ± 0% ~ (p=0.687 n=47+43)
ServerToClientStreamReuseFrames-4 828B ± 1% 846B ±13% +2.17% (p=0.009 n=38+50)
Server_GetRequest-4 4.54kB ± 1% 4.55kB ± 1% ~ (p=0.980 n=48+50)
Server_PostRequest-4 5.47kB ± 1% 5.47kB ± 1% ~ (p=0.622 n=50+50)
ClientRequestHeaders/___0_Headers-4 4.55kB ± 0% 4.94kB ± 0% +8.56% (p=0.000 n=41+50)
ClientRequestHeaders/__10_Headers-4 6.28kB ± 0% 6.66kB ± 0% +6.15% (p=0.000 n=50+46)
ClientRequestHeaders/_100_Headers-4 30.3kB ± 0% 30.7kB ± 0% +1.28% (p=0.000 n=49+50)
ClientRequestHeaders/1000_Headers-4 343kB ± 0% 344kB ± 1% +0.34% (p=0.000 n=45+50)
name old allocs/op new allocs/op delta
ServerGets-4 66.0 ± 0% 66.0 ± 0% ~ (all equal)
ServerPosts-4 79.0 ± 0% 79.0 ± 0% ~ (all equal)
ServerToClientStreamDefaultOptions-4 17.0 ± 0% 17.0 ± 0% ~ (all equal)
ServerToClientStreamReuseFrames-4 16.0 ± 0% 16.0 ± 0% ~ (all equal)
Server_GetRequest-4 62.0 ± 0% 62.0 ± 0% ~ (all equal)
Server_PostRequest-4 75.0 ± 0% 75.0 ± 0% ~ (all equal)
ClientRequestHeaders/___0_Headers-4 52.0 ± 0% 59.0 ± 0% +13.46% (p=0.000 n=50+50)
ClientRequestHeaders/__10_Headers-4 84.0 ± 0% 91.0 ± 0% +8.33% (p=0.000 n=50+50)
ClientRequestHeaders/_100_Headers-4 365 ± 0% 372 ± 0% +1.92% (p=0.000 n=50+50)
ClientRequestHeaders/1000_Headers-4 5.09k ± 0% 5.10k ± 0% +0.22% (p=0.000 n=45+50)
(3 comments)
Patch Set #10, Line 1513: t.Errorf("peerMaxHeaderListSize = %v; want %v", cc.peerMaxHeaderListSize(), MaxHeaderListSizeUnlimited)
If I run just this test with `-count 2000`, then I get a handful (~1-4) of failures here:
--- FAIL: TestTransportChecksRequestHeaderListSize (0.02s)
transport_test.go:1513: peerMaxHeaderListSize = 25920; want 4294967295
I plan to dig into it this weekend, but didn't want to block the review over it.
In case anyone gets a chance to review it before I have time to look into it: could `GetClientConn` have returned a cached connection? Is that possible given I create a fresh Transport and call `defer tr.CloseIdleConnections()`, above? If so, how can I ensure I get an uncached connection?
Patch Set #10, Line 1582: topUpHeaders(t, req.Header, hSize, peerSize, filler)
The race detector warns of a data race with (*ClientConn).readLoop when modifying req.Header here.
Should I just create a new Request for each call to RoundTrip? Or is there some way to safely reuse the Request while modifying the headers?
Patch Set #10, Line 2020: want = tt.want
Mutex needed around `want` here? Better to use a channel? Race detector doesn't complain. My assumption was that the server handler (above) must return before RoundTrip does.
Tom Bergan posted comments on this change.
Patch set 10:
(9 comments)
Patch Set #6, Line 1092: return nil, err
Yes. Benchcmp results are posted here:
I posted a suggestion in the latest draft that eliminates all of the allocations except strings.ToLower. If those allocations are a concern, we can use strings.EqualFold for comparisons instead of strings.ToLower.
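As a rough sketch, the allocation-free comparison Tom is describing would look something like this (k is a header key from req.Header; the surrounding logic is assumed, not quoted from the patch):
// strings.ToLower(k) allocates when k isn't already lowercase;
// strings.EqualFold compares case-insensitively without allocating.
if strings.EqualFold(k, "user-agent") {
	// handle the caller-supplied User-Agent here
}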
Patch Set #10, Line 174: peerMaxHeaderListSize_ uint32
no underscore in field names
Patch Set #10, Line 1116: // Do a first pass over the headers counting bytes to ensure
I think it would be simpler if the code looked something like the following. WDYT?
enumerateHeaders := func(f func(name, value string)) {
	// old code with cc.writeHeader(name, value) calls replaced with f(name, value)
}
// First pass.
var hlSize uint64
enumerateHeaders(func(name, value string) {
	hf := hpack.HeaderField{...}
	hlSize += uint64(hf.Size())
})
if hlSize > max {
	fail
}
// Second pass.
enumerateHeaders(func(name, value string) {
	cc.writeHeader(name, value)
})
Patch Set #10, Line 1156: if k == "User-Agent" || k == "user-agent" {
Can you explain why you added this? I don't see an equivalent of this in the original code, unless this is intended to be a poor-man's approximation of the strings.ToLower check? (Also see my above suggestion which might make this unnecessary.)
Patch Set #10, Line 1372: const topUpHeadersKey = "Top-Up"
Fill-Up? Padding?
It wasn't immediately obvious to me what "Top-Up" means.
Thus, the caller can later delete
// topUpHeadersKey from the header to return it to a state that contains
// fewer than limit bytes.
Well, not if limit-size is a multiple of sizeof(filler)? (Because in that case, Top-Up is not needed.)
Patch Set #10, Line 1382: func topUpHeaders(t *testing.T, h http.Header, size, limit uint64, filler string) {
padHeaders?
Patch Set #10, Line 1513: t.Errorf("peerMaxHeaderListSize = %v; want %v", cc.peerMaxHeaderListSize(), MaxHeaderListSizeUnlimited)
If I run just this test with `-count 2000`, then I get a handful (~1-4) of
I believe this should use a new connection.
Why would you expect this to be MaxHeaderListSizeUnlimited? I believe it's possible that the SETTINGS frame has already been received from the server by this point.
Patch Set #10, Line 1582: topUpHeaders(t, req.Header, hSize, peerSize, filler)
The race detector warns of a data race with (*ClientConn).readLoop when mod
A data race shouldn't happen here. That suggests you're not waiting for the request to finish?
Mike Appleby posted comments on this change.
Patch set 10:
(10 comments)
Patch Set #6, Line 1092: return nil, err
I posted a suggestion in the latest draft that eliminates all of the alloca
Ack
Patch Set #10, Line 174: peerMaxHeaderListSize_ uint32
no underscore in field names
What is the convention for avoiding a name collision with the method of the same name? Make the field name peerMaxHeaderListSz?
Patch Set #10, Line 1116: // Do a first pass over the headers counting bytes to ensure
I think it would be simpler if the code looked something like the following
Brilliant! I am in favor of this suggestion since I agree that it would improve readability, but it will also result in a number of additional allocations linear in the size of req.Header, due to the call to strings.ToLower on line 1126 of the original file.
I guess this is the call to strings.ToLower you propose to replace with strings.EqualFold? That would get us down to zero new allocations, and even reduce the number of allocations in cases where req.Header is non-empty; however, if we replace that call to strings.ToLower with calls to strings.EqualFold, we'll no longer be normalizing the key to lower-case before calling cc.writeHeader, which as I understand it is required by the RFC:
https://tools.ietf.org/html/rfc7540#section-8.1.2
I suppose I could add a "wantLowerCase" boolean arg to enumerateHeaders and only normalize the case when writing the headers, but not when counting bytes. Or else pass an additional func arg "normalizeKey" which is just identity at byte-counting time and strings.ToLower when encoding.
Any preference here? Should I just leave the call to strings.ToLower and accept the additional allocations when req.Header is non-empty? Or arrange to only lowercase the keys at encoding time?
Patch Set #10, Line 1156: if k == "User-Agent" || k == "user-agent" {
Can you explain why you added this? I don't see an equivalent of this in th
Yes, this was intended as a poor-man's strings.ToLower. I wasn't aware of strings.EqualFold, which is a much better suggestion, thanks!
Patch Set #10, Line 1372: const topUpHeadersKey = "Top-Up"
Fill-Up? Padding?
I agree that "topUp" is a terrible name. I'll switch to "pad" or "padding" everywhere.
Another option would be to delete all the "topUp" code and just test the max header size by stuffing a single large key/value pair into the req headers to push it over the limit. It seemed like a good idea to test many small headers as well as a single large one, but if we don't care about testing that case, I can just remove this code.
Thus, the caller can later delete
// topUpHeadersKey from the header to return it to a state that contains
// fewer than limit bytes.
Well, not if limit-size is a multiple of sizeof(filler)? (Because in that c
No, because I always reserve enough space for at least an
hpack.HeaderField{Name: topUpHeadersKey, Value: ""}
Although I admit that the comment as written does not make that clear. Honestly, it's been so long since I wrote this, I wasn't sure myself!
As it turns out, I never even make use of this "feature" anywhere, so I'll just remove topUpHeadersKey altogether.
Also, as I mentioned in the previous comment, the fact that this function is confusing might be justification for removing it entirely.
Patch Set #10, Line 1382: func topUpHeaders(t *testing.T, h http.Header, size, limit uint64, filler string) {
padHeaders?
Ack
Patch Set #10, Line 1513: t.Errorf("peerMaxHeaderListSize = %v; want %v", cc.peerMaxHeaderListSize(), MaxHeaderListSizeUnlimited)
I believe this should use a new connection.
Good question! I guess I assumed that the connection wouldn't actually be established until the first request is made, below. In hindsight, that was a silly thing to assume.
The purpose of this test is just to check that a fresh ClientConn has the expected default value for peerMaxHeaderListSize. Should I just allocate a new ClientConn struct here without dialing a connection and check the default that way? Or else just drop this check completely?
Patch Set #10, Line 1582: topUpHeaders(t, req.Header, hSize, peerSize, filler)
A data race shouldn't happen here. That suggests you're not waiting for the
How can I ensure the request has finished? Currently, I call RoundTrip, then ioutil.ReadAll on the response body, and finally res.Body.Close().
Patch Set #10, Line 2020: want = tt.want
Mutex needed around `want` here? Better to use a channel? Race detector doe
Ping. Any thoughts on this? Ok as is?
Tom Bergan posted comments on this change.
Patch set 10:
A few comments now, more later.
(2 comments)
Patch Set #10, Line 1116: // Do a first pass over the headers counting bytes to ensure
Brilliant! I am in favor of this suggestion since I agree that it would imp
If we're worried about those allocations (I think we are, per Brad's comments), enumerateHeaders could use strings.EqualFold instead of ToLower, and the second pass could be:
enumerateHeaders(func(name, value string) {
cc.writeHeader(strings.ToLower(name), value)
})
Patch Set #10, Line 1582: topUpHeaders(t, req.Header, hSize, peerSize, filler)
How can I ensure the request has finished? Currently, I call RoundTrip, the
What you're doing should ensure the request has finished. Can you copy the data race report here?
Mike Appleby posted comments on this change.
Patch set 10:
(2 comments)
Patch Set #10, Line 1116: // Do a first pass over the headers counting bytes to ensure
If we're worried about those allocations (I think we are, per Brad's comments), enumerateHeaders could use strings.EqualFold instead of ToLower, and the second pass could be:
enumerateHeaders(func(name, value string) {
cc.writeHeader(strings.ToLower(name), value)
})
Indeed, this will work beautifully. I assumed it would generate extra allocations due to additional calls to strings.ToLower for all the default headers, but it seems ToLower is clever enough to not allocate when its input is already lowercase. Since all the default header keys are passed as lowercase string literals, this should be fine.
Patch Set #10, Line 1582: topUpHeaders(t, req.Header, hSize, peerSize, filler)
What you're doing should ensure the request has finished. Can you copy the data race report here?
Copy/pasted inline below. Here is a link to a github gist in case the inline formatting gets mangled:
https://gist.github.com/appleby/613bdaf89e10c3893dedc72f6dfd0192
[ma@march http2]? go test -run '^TestTransportChecksRequestHeaderListSize$' -race
==================
WARNING: DATA RACE
Write at 0x00c42017a038 by goroutine 7:
golang.org/x/net/http2.TestTransportChecksRequestHeaderListSize()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport_test.go:1575 +0x99c
testing.tRunner()
/home/ma/src/repos/go/src/testing/testing.go:754 +0x16c
Previous read at 0x00c42017a038 by goroutine 13:
golang.org/x/net/http2.isConnectionCloseRequest()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:2207 +0x7b
golang.org/x/net/http2.(*clientConnReadLoop).endStreamError()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:1840 +0xe2
golang.org/x/net/http2.(*clientConnReadLoop).endStream()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:1829 +0x54
golang.org/x/net/http2.(*clientConnReadLoop).processData()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:1819 +0x1cb
golang.org/x/net/http2.(*clientConnReadLoop).run()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:1459 +0x9bd
golang.org/x/net/http2.(*ClientConn).readLoop()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:1351 +0x11c
Goroutine 7 (running) created at:
testing.(*T).Run()
/home/ma/src/repos/go/src/testing/testing.go:800 +0x597
testing.runTests.func1()
/home/ma/src/repos/go/src/testing/testing.go:1053 +0xb2
testing.tRunner()
/home/ma/src/repos/go/src/testing/testing.go:754 +0x16c
testing.runTests()
/home/ma/src/repos/go/src/testing/testing.go:1051 +0x50a
testing.(*M).Run()
/home/ma/src/repos/go/src/testing/testing.go:932 +0x206
main.main()
golang.org/x/net/http2/_test/_testmain.go:604 +0x1d3
Goroutine 13 (running) created at:
golang.org/x/net/http2.(*Transport).newClientConn()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:534 +0xe1a
golang.org/x/net/http2.(*Transport).dialClientConn()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:411 +0x191
golang.org/x/net/http2.(*dialCall).dial()
/home/ma/opt/godev/src/golang.org/x/net/http2/client_conn_pool.go:108 +0x80
==================
==================
WARNING: DATA RACE
Write at 0x00c4201862a0 by goroutine 7:
runtime.mapassign_faststr()
/home/ma/src/repos/go/src/runtime/hashmap_fast.go:598 +0x0
net/textproto.MIMEHeader.Add()
/home/ma/src/repos/go/src/net/textproto/header.go:15 +0x15b
net/http.Header.Add()
/home/ma/src/repos/go/src/net/http/header.go:24 +0x60
golang.org/x/net/http2.topUpHeaders()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport_test.go:1409 +0x402
golang.org/x/net/http2.TestTransportChecksRequestHeaderListSize()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport_test.go:1582 +0xb09
testing.tRunner()
/home/ma/src/repos/go/src/testing/testing.go:754 +0x16c
Previous read at 0x00c4201862a0 by goroutine 13:
runtime.mapaccess1_faststr()
/home/ma/src/repos/go/src/runtime/hashmap_fast.go:208 +0x0
golang.org/x/net/http2.isConnectionCloseRequest()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:2207 +0xae
golang.org/x/net/http2.(*clientConnReadLoop).endStreamError()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:1840 +0xe2
golang.org/x/net/http2.(*clientConnReadLoop).endStream()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:1829 +0x54
golang.org/x/net/http2.(*clientConnReadLoop).processData()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:1819 +0x1cb
golang.org/x/net/http2.(*clientConnReadLoop).run()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:1459 +0x9bd
golang.org/x/net/http2.(*ClientConn).readLoop()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:1351 +0x11c
Goroutine 7 (running) created at:
testing.(*T).Run()
/home/ma/src/repos/go/src/testing/testing.go:800 +0x597
testing.runTests.func1()
/home/ma/src/repos/go/src/testing/testing.go:1053 +0xb2
testing.tRunner()
/home/ma/src/repos/go/src/testing/testing.go:754 +0x16c
testing.runTests()
/home/ma/src/repos/go/src/testing/testing.go:1051 +0x50a
testing.(*M).Run()
/home/ma/src/repos/go/src/testing/testing.go:932 +0x206
main.main()
golang.org/x/net/http2/_test/_testmain.go:604 +0x1d3
Goroutine 13 (running) created at:
golang.org/x/net/http2.(*Transport).newClientConn()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:534 +0xe1a
golang.org/x/net/http2.(*Transport).dialClientConn()
/home/ma/opt/godev/src/golang.org/x/net/http2/transport.go:411 +0x191
golang.org/x/net/http2.(*dialCall).dial()
/home/ma/opt/godev/src/golang.org/x/net/http2/client_conn_pool.go:108 +0x80
==================
--- FAIL: TestTransportChecksRequestHeaderListSize (0.07s)
testing.go:707: race detected during execution of test
FAIL
exit status 1
FAIL golang.org/x/net/http2 0.099s
(1 comment)
Thus, the caller can later delete
// topUpHeadersKey from the header to return it to a state that contains
// fewer than limit bytes.
No, because I always reserve enough space for at least an
Also, not sure why I have the current size passed in by the caller, rather than having the function calculate current size at the start. I'm sure it made sense to me at the time!
So I'll remove the size argument, as well. Assuming we don't want to just delete the function.
Patch Set 10:
(2 comments)
(1 comment)
Patch Set #10, Line 1582: topUpHeaders(t, req.Header, hSize, peerSize, filler)
> What you're doing should ensure the request has finished. Can you copy th
I think I found the source of the race. The method (*clientConnReadLoop) endStreamError first writes EOF to the clientStream's bufPipe. At this point, the ioutil.ReadAll(res.Body) and res.Body.Close() in my test return. Concurrently, I modify req.Header in my test and endStreamError calls isConnectionCloseRequest, which reads from req.Header. Does that sound right?
Here are definitions for endStreamError & isConnectionCloseRequest (comments are mine):
func (rl *clientConnReadLoop) endStreamError(cs *clientStream, err error) {
var code func()
if err == nil {
err = io.EOF
code = cs.copyTrailers
}
cs.bufPipe.closeWithErrorAndCode(err, code)
// res.Body.Close returns in my test goroutine,
// which goes on to modify req.Header
delete(rl.activeRes, cs.ID)
if isConnectionCloseRequest(cs.req) { // req.Header read here
rl.closeWhenIdle = true
}
select {
case cs.resc <- resAndError{err: err}:
default:
}
}
func isConnectionCloseRequest(req *http.Request) bool {
return req.Close || httplex.HeaderValuesContainsToken(req.Header["Connection"], "close")
}
Perhaps I'm trying to do something unsupported? Should I just create a new request for each RoundTrip?
Note that if I modify endStreamError to do the isConnectionCloseRequest check before the cs.bufPipe.closeWithErrorAndCode, then the race goes away and it doesn't appear to break anything (all tests in package passing). Not sure if that's the correct fix, however.
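For reference, the other workaround mentioned above (a fresh request per RoundTrip, so req.Header is never written after the request has been handed to the transport) might look roughly like this; url is assumed to be the test server's address, and the helpers are the ones from the test under review:
newRequest := func() *http.Request {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		t.Fatalf("NewRequest: %v", err)
	}
	return req
}
// Build a new request instead of mutating the previous one.
req := newRequest()
topUpHeaders(t, req.Header, hSize, peerSize, filler)
checkRoundTrip(req, nil, "Headers at limit")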
(1 comment)
Patch Set #10, Line 1582: topUpHeaders(t, req.Header, hSize, peerSize, filler)
I think I found the source of the race. The method (*clientConnReadLoop) en
Ping
Tom Bergan posted comments on this change.
Patch set 10:
Sorry for the delay. I'm about to go away for a couple days. I'll take a look next week.
Very sorry for the delay. Let me know when this is ready for another look.
3 comments:
Patch Set #10, Line 174: peerMaxHeaderListSize_ uint32
What is the convention for avoiding a name collision with the method of the
You don't need that method. You can use a default value (anything) when the ClientConn is created (near line 506), then from that point, the zero value is not special.
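A sketch of what that could look like where newClientConn constructs the ClientConn; the default shown is a placeholder, not the value chosen in the patch:
cc := &ClientConn{
	// ... existing fields ...
	peerMaxHeaderListSize: 0xffffffff, // placeholder default until the peer's SETTINGS arrives
}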
Patch Set #10, Line 1116: // Do a first pass over the headers counting bytes to ensure
> If we're worried about those allocations (I think we are, per Brad's comm
Sorry if this was asked previously, I'm slowly paging in context for this change.
Have you given thought to doing this in Framer (frame.go) so that the code can be reused in the server? Also see the code in write.go, which is used to encode headers for the server.
Patch Set #10, Line 1582: topUpHeaders(t, req.Header, hSize, peerSize, filler)
Ping
You did find the race, thanks! Filed https://github.com/golang/go/issues/21316
No problem. I'm sure reviewing this changelist isn't at the top of anyone's TODO list. Plus, I've seen the number of changelists that get mailed, and I sympathize :).
When you get a chance, maybe just respond to the comments below; then I'll rebase and send a new patch set incorporating all the feedback so far.
2 comments:
Patch Set #10, Line 174: peerMaxHeaderListSize_ uint32
You don't need that method. You can use a default value (anything) when the
What if the peer sends a settings frame explicitly requesting max header list size of 0 (unlimited)? Admittedly that would be strange, since unlimited is the default. It's been a while since I read the RFC, but on a quick re-skim it doesn't seem to be explicitly prohibited.
I could do the normalization at the time I store the value in (*clientConnReadLoop) processSettings, like so
err := f.ForeachSetting(func(s Setting) error {
switch s.ID {
...
case SettingMaxHeaderListSize:
if s.Val == 0 {
cc.peerMaxHeaderListSize = MaxHeaderListSizeUnlimited
} else {
cc.peerMaxHeaderListSize = s.Val
}
...
Or I could get rid of the getter method, let zero represent the unlimited value (as it does on the wire), and just add an explicit zero check everywhere I read the value, i.e.
if cc.peerMaxHeadListSize != 0 && headerBytes > cc.peerMaxHeaderListSize {
// error
}
I went with adding a getter method for symmetry with (*Transport) maxHeaderListSize. That is, there is a single value (MaxHeaderListSizeUnlimited in this changelist) that represents the unlimited value for both (*Transport) maxHeaderListSize and (*ClientConn) peerMaxHeaderListSize.
Let me know what you prefer.
Patch Set #10, Line 1116: // Do a first pass over the headers counting bytes to ensure
Sorry if this was asked previously, I'm slowly paging in context for this c
My original intention was to get this changelist reviewed to vet the basic approach, then make the corresponding changes in write.go for the server side. In hindsight, I probably should have been optimizing for fewer code review cycles, rather than a smaller changelist/quicker feedback.
At the time I originally mailed this changelist, there were only a handful of places that encKV was called outside of encodeHeaders in write.go, and I thought it would be possible to take a similar approach to the one presented here in (*ClientConn) encodeHeaders.
Looking at write.go today, I see it now has a few additional places where it encodes headers with encKV outside of the encodeHeaders function. Maybe there is no single convenient place in write.go to do the byte accounting the way there is on the client side. I'd have to spend some time looking into write.go again to see how best to handle it for the server side.
It would be great if there were a single a place to do the byte accounting that works for both client and server. Maybe Framer is that place. Do you have a particular approach in mind? Brad mentions in the github issue[1] that it's important to catch the error before encoding the headers, to prevent modifying the hpack.Encoder state. Otherwise, he mentions another approach of snapshotting the hpack state and restoring it on error, but I'm not sure what that would entail or whether that could be done without introducing any new allocations. Also note that the max header list size is based on the uncompressed size of header fields. Would Framer have access to the uncompressed size?
As things stand, my plan was to try a similar byte-counting-before-hpack-encoding approach in write.go. If you think moving the header list size check into frame.go is a better approach and compatible with the above caveats, let me know and I'll look into it. Not sure I'll have time to implement it that way, but I can at least spend a day or two looking into it.
[1]: https://github.com/golang/go/issues/13959#issuecomment-220222610
2 comments:
Patch Set #10, Line 174: peerMaxHeaderListSize_ uint32
What if the peer sends a settings frame explicitly requesting max header list size of 0 (unlimited)?
The RFC does not give "0" special semantics; those are special semantics we apply to the Go Transport config.
http://httpwg.org/specs/rfc7540.html#SettingValues
http://httpwg.org/specs/rfc7540.html#MaxHeaderBlock
If the peer sends SETTINGS_MAX_HEADER_LIST_SIZE=0, then I guess the peer does not want to receive headers? I can't imagine a peer ever sending that.
Patch Set #10, Line 1116: // Do a first pass over the headers counting bytes to ensure
Brad mentions in the github issue[1] that it's important to catch the error before encoding the headers, to prevent modifying the hpack.Encoder state.
The above quote is the most important restriction (necessary for correctness).
I don't have anything specific in mind. Snapshotting would require an allocation, as you say, so it seems off the table unless you can come up with an elegant copy-on-write optimization. My only other thought is to give Framer a "dry run" mode. But that doesn't seem like much of an improvement over what you have here -- in both cases, the caller would need to encode headers twice (once for the dry run, once to actually encode).
What you're doing here seems fine IMO.
Mike Appleby uploaded patch set #11 to this change.
http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn
Add a new peerMaxHeaderListSize member to ClientConn which records the
SETTINGS_MAX_HEADER_LIST_SIZE requested by the client's peer, and
respect this limit in (*ClientConn) encodeHeaders / encodeTrailers.
Attempting to send more than peerMaxHeaderListSize bytes of headers or
trailers will result in RoundTrip returning errRequestHeaderListSize.
Updates golang/go#13959
Change-Id: Ic707179782acdf8ae543429ea1af7f4f30e67e59
---
M http2/transport.go
M http2/transport_test.go
2 files changed, 426 insertions(+), 60 deletions(-)
Benchsave results are here:
https://perf.golang.org/search?q=upload:20170814.1
Benchmarks were run on my laptop and CPU numbers are not very stable.
Copy/pasting benchstat results inline:
$ benchsave http2-tip-1c05540.txt http2-cl-29243-patch-set-11.txt
name old time/op new time/op delta
ServerGets-4 172µs ± 1% 172µs ± 2% +0.15% (p=0.001 n=194+203)
ServerPosts-4 319µs ±45% 322µs ±52% ~ (p=0.946 n=250+250)
ServerToClientStreamDefaultOptions-4 54.4µs ±60% 54.8µs ±59% ~ (p=0.783 n=250+250)
ServerToClientStreamReuseFrames-4 54.7µs ±59% 54.5µs ±60% ~ (p=0.851 n=250+250)
Server_GetRequest-4 292µs ±50% 289µs ±43% -0.94% (p=0.000 n=250+250)
Server_PostRequest-4 340µs ±45% 339µs ±45% -0.26% (p=0.004 n=250+250)
ClientRequestHeaders/___0_Headers-4 231µs ±53% 231µs ±51% +0.34% (p=0.001 n=250+250)
ClientRequestHeaders/__10_Headers-4 265µs ±53% 269µs ±53% +1.40% (p=0.000 n=250+250)
ClientRequestHeaders/_100_Headers-4 475µs ±52% 509µs ±54% +7.09% (p=0.000 n=250+250)
ClientRequestHeaders/1000_Headers-4 7.41ms ±53% 7.09ms ±60% ~ (p=0.570 n=250+250)
name old alloc/op new alloc/op delta
ServerGets-4 4.49kB ± 0% 4.49kB ± 1% -0.06% (p=0.008 n=179+217)
ServerPosts-4 5.63kB ± 4% 5.64kB ± 4% ~ (p=0.401 n=250+250)
ServerToClientStreamDefaultOptions-4 918B ±10% 916B ±10% ~ (p=0.786 n=250+250)
ServerToClientStreamReuseFrames-4 870B ±10% 869B ±10% ~ (p=0.618 n=250+250)
Server_GetRequest-4 4.89kB ± 8% 4.87kB ± 8% ~ (p=0.208 n=250+250)
Server_PostRequest-4 5.66kB ± 4% 5.66kB ± 4% ~ (p=0.813 n=250+250)
ClientRequestHeaders/___0_Headers-4 4.58kB ± 0% 4.62kB ± 0% +0.94% (p=0.000 n=247+245)
ClientRequestHeaders/__10_Headers-4 6.31kB ± 0% 6.34kB ± 0% +0.58% (p=0.000 n=245+248)
ClientRequestHeaders/_100_Headers-4 30.3kB ± 0% 30.4kB ± 0% +0.12% (p=0.000 n=249+250)
ClientRequestHeaders/1000_Headers-4 344kB ± 1% 344kB ± 1% ~ (p=0.633 n=250+250)
name old allocs/op new allocs/op delta
ServerGets-4 66.0 ± 0% 66.0 ± 0% ~ (all equal)
ServerPosts-4 79.0 ± 0% 79.0 ± 0% ~ (all equal)
ServerToClientStreamDefaultOptions-4 17.0 ± 0% 17.0 ± 0% ~ (all equal)
ServerToClientStreamReuseFrames-4 16.0 ± 0% 16.0 ± 0% ~ (all equal)
Server_GetRequest-4 62.0 ± 0% 62.0 ± 0% ~ (all equal)
Server_PostRequest-4 75.0 ± 0% 75.0 ± 0% ~ (all equal)
ClientRequestHeaders/___0_Headers-4 54.0 ± 0% 57.0 ± 0% +5.56% (p=0.000 n=250+250)
ClientRequestHeaders/__10_Headers-4 86.0 ± 0% 89.0 ± 0% +3.49% (p=0.000 n=250+250)
ClientRequestHeaders/_100_Headers-4 367 ± 0% 370 ± 0% +0.82% (p=0.000 n=250+250)
ClientRequestHeaders/1000_Headers-4 5.10k ± 0% 5.10k ± 0% +0.06% (p=0.000 n=250+249)
9 comments:
Patch Set #10, Line 174: // Settings from peer: (also guarded by mu)
> What if the peer sends a settings frame explicitly requesting max header
Done
Patch Set #10, Line 1116: // if the stream is dead.
> Brad mentions in the github issue[1] that it's important to catch the err
Done
Patch Set #10, Line 1156: cc.hbuf.Reset()
Yes, this was intended as a poor-man's strings.ToLower. I wasn't aware of s
I agree that "topUp" is a terrible name. I'll switch to "pad" or "padding"
Done
hf := hpack.HeaderField{Name: k, Value: v}
size += hf.Size()
Also, not sure why I have the current size passed in by the caller, rather
Done
Ack
Done
Good question! I guess I assumed that the connection wouldn't actually be e
Done
Patch Set #10, Line 1582: // peerMaxHeaderListSize.
You did find the race, thanks! Filed https://github.com/golang/go/issues/21
Ping. Any thoughts on this? Ok as is?
This test has been removed, since the latest patch set doesn't alter how the User-Agent key is handled.
Haven't had a chance to look at the tests yet.
Patch set 11: Run-TryBot +1
2 comments:
Patch Set #11, Line 177: // zero means "unlimited"
This is not in the H2 spec, please remove. If the peer sends SETTINGS_MAX_HEADER_LIST_SIZE=0, they don't mean "unlimited", they mean 0 bytes, which is insane, but we should respect it anyway.
Patch Set #11, Line 1092: cc.writeStreamReset(cs.ID, ErrCodeInternal, err)
Need to call cc.forgetStreamID(cs.ID) after cc.writeStreamReset. This is ugly and we need to clean it up eventually, but for now, we have to copy-paste those calls.
TryBots beginning. Status page: https://farmer.golang.org/try?commit=97a7d7b1
TryBots are happy.
Patch set 11: TryBot-Result +1
2 comments:
Patch Set #11, Line 177: // zero means "unlimited"
This is not in the H2 spec, please remove. […]
But the spec does say that the default value is unlimited, so we either need some way to represent the unlimited value or else ignore that part of the spec. The following are the options I can come up with. Let me know what you prefer.
1. Ignore the "unlimited" default and invent some concrete default for peerMaxHeaderListSize, say UINT32_MAX. For most (all?) real programs, a default of UINT32_MAX would be as good as unlimited. However, using a concrete value as the default rather than having a special "unlimited" value means that we always have to count the bytes; we don't get to skip counting bytes for the default "unlimited" case. In other words this
if peerMaxheaderListSize != unlimited {
// count bytes
if size > peerMaxHeaderListSize {
// error
}
}
// encode headers
would have to change to this
// always count bytes
if size > peerMaxHeaderListSize {
// error
}
// encode headers
Maybe the peer neglecting to send SETTINGS_MAX_HEADER_LIST_SIZE is rare enough that we don't care about this optimization?
2. Add a boolean flag to ClientConn, something like haveReceivedPeerMaxHeaderListSize. False means unlimited; true means use the value in peerMaxHeaderListSize. This allows us to have a special "unlimited" default value while still representing all peer-requested limits exactly. Seemed like overkill though.
3. Use some sentinel value to mean unlimited. The obvious candidates are zero or UINT32_MAX. Using UINT32_MAX has the advantage that it's "less wrong" and probably sufficiently large to be indistinguishable from unlimited for most real programs; whereas using zero has the advantage that ClientConns constructed via a struct literal rather than calling NewClientConn get the correct default value. For example, when I first made the changes for this patch set, I used UINT32_MAX as the "unlimited" value, assigning the default in newClientConn. But that broke one of the tests, TestTransportRequestPathPseudo, which creates a &ClientConn{} and proceeds to call encodeHeaders on it. It's easy enough to update the test of course, but that made me nervous about potentially breaking user code, so I switched to using zero to represent "unlimited".
Patch Set #11, Line 1092: cc.writeStreamReset(cs.ID, ErrCodeInternal, err)
Need to call cc.forgetStreamID(cs.ID) after cc.writeStreamReset. […]
Will do. Thanks for catching that.
1 comment:
Patch Set #11, Line 177: // zero means "unlimited"
Good point about the default unlimited. I don't like using the sentinel zero in this way because that value is legal (if unlikely) for a client to send but does not have the same semantics as this implementation. How about:
4. Make this an uint64 and use UINT64_MAX as unlimited.
5. Make this a *uint32 and use nil as unlimited.
6. Make this a *uint64 and use nil as unlimited.
But that broke one of the tests, TestTransportRequestPathPseudo, which creates a &ClientConn{} ... that made me nervous about potentially breaking user code
User code outside this package cannot use &ClientConn{}, because they have no way to set fields like t or tconn without calling Transport.NewClientConn, so I'm not concerned about that breaking user code.
1 comment:
Patch Set #11, Line 177: // zero means "unlimited"
4. Make this an uint64 and use UINT64_MAX as unlimited.
5. Make this a *uint32 and use nil as unlimited.
Both of these are nice options. Using a uint64 avoids nil checks and avoids a cast to uint64 when comparing against hlSize, which is declared as uint64. On the other hand, using a *uint32 perfectly captures the semantics of set/unset value and might save a few bytes on 32-bit architectures. I'll pick one of these options and post a new patch set soon.
User code outside this package cannot use &ClientConn{}, because they have no way to set fields like t or tconn without calling Transport.NewClientConn, so I'm not concerned about that breaking user code.
That's a relief :).
1 comment:
Patch Set #11, Line 177: // zero means "unlimited"
> 4. Make this an uint64 and use UINT64_MAX as unlimited. […]
There's also this if you like it better:
6. *uint64
1 comment:
Patch Set #11, Line 177: // zero means "unlimited"
There's also this if you like it better: […]
In the process of implementing this change, I realized why I went with zero == unlimited in the first place. I was taking a cue from the comment on serverConn.peerMaxHeaderListSize, which already exists but is not yet enforced (I think):
peerMaxHeaderListSize uint32 // zero means unknown (default)
Of course, we could also change that to a *uint32 or similar when the time comes to implement the server side of this.
Mike Appleby uploaded patch set #12 to this change.
http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn
Add a new peerMaxHeaderListSize member to ClientConn which records the
SETTINGS_MAX_HEADER_LIST_SIZE requested by the client's peer, and
respect this limit in (*ClientConn) encodeHeaders / encodeTrailers.
Attempting to send more than peerMaxHeaderListSize bytes of headers or
trailers will result in RoundTrip returning errRequestHeaderListSize.
Updates golang/go#13959
Change-Id: Ic707179782acdf8ae543429ea1af7f4f30e67e59
---
M http2/transport.go
M http2/transport_test.go
2 files changed, 432 insertions(+), 61 deletions(-)
Benchsave results for patch set 12 are here:
https://perf.golang.org/search?q=upload:20170817.1
$ benchsave http2-tip-1c05540.txt http2-cl-29243-patch-set-12.txt
name old time/op new time/op delta
ServerGets-4 172µs ± 1% 172µs ± 1% -0.29% (p=0.000 n=194+233)
ServerPosts-4 319µs ±45% 292µs ±59% -8.50% (p=0.000 n=250+250)
ServerToClientStreamDefaultOptions-4 54.4µs ±60% 51.3µs ±57% -5.77% (p=0.035 n=250+250)
ServerToClientStreamReuseFrames-4 54.7µs ±59% 53.0µs ±58% ~ (p=0.312 n=250+250)
Server_GetRequest-4 292µs ±50% 281µs ±53% -3.89% (p=0.015 n=250+250)
Server_PostRequest-4 340µs ±45% 322µs ±42% -5.38% (p=0.000 n=250+250)
ClientRequestHeaders/___0_Headers-4 231µs ±53% 229µs ±51% -0.53% (p=0.000 n=250+250)
ClientRequestHeaders/__10_Headers-4 265µs ±53% 274µs ±62% +3.14% (p=0.000 n=250+250)
ClientRequestHeaders/_100_Headers-4 475µs ±52% 492µs ±57% +3.47% (p=0.000 n=250+250)
ClientRequestHeaders/1000_Headers-4 7.41ms ±53% 6.72ms ±72% ~ (p=0.218 n=250+250)
name old alloc/op new alloc/op delta
ServerGets-4 4.49kB ± 0% 4.49kB ± 0% ~ (p=0.170 n=179+237)
ServerPosts-4 5.63kB ± 4% 5.60kB ± 3% ~ (p=0.122 n=250+250)
ServerToClientStreamDefaultOptions-4 918B ±10% 912B ±10% ~ (p=0.081 n=250+250)
ServerToClientStreamReuseFrames-4 870B ±10% 865B ±11% ~ (p=0.190 n=250+250)
Server_GetRequest-4 4.89kB ± 8% 4.85kB ± 7% -0.79% (p=0.043 n=250+250)
Server_PostRequest-4 5.66kB ± 4% 5.64kB ± 4% ~ (p=0.098 n=250+250)
ClientRequestHeaders/___0_Headers-4 4.58kB ± 0% 4.62kB ± 0% +0.93% (p=0.000 n=247+249)
ClientRequestHeaders/__10_Headers-4 6.31kB ± 0% 6.34kB ± 0% +0.60% (p=0.000 n=245+250)
ClientRequestHeaders/_100_Headers-4 30.3kB ± 0% 30.4kB ± 0% +0.12% (p=0.000 n=249+250)
ClientRequestHeaders/1000_Headers-4 344kB ± 1% 343kB ± 1% -0.20% (p=0.000 n=250+193)
name old allocs/op new allocs/op delta
ServerGets-4 66.0 ± 0% 66.0 ± 0% ~ (all equal)
ServerPosts-4 79.0 ± 0% 79.0 ± 0% ~ (all equal)
ServerToClientStreamDefaultOptions-4 17.0 ± 0% 17.0 ± 0% ~ (all equal)
ServerToClientStreamReuseFrames-4 16.0 ± 0% 16.0 ± 0% ~ (all equal)
Server_GetRequest-4 62.0 ± 0% 62.0 ± 0% ~ (all equal)
Server_PostRequest-4 75.0 ± 0% 75.0 ± 0% ~ (all equal)
ClientRequestHeaders/___0_Headers-4 54.0 ± 0% 57.0 ± 0% +5.56% (p=0.000 n=250+250)
ClientRequestHeaders/__10_Headers-4 86.0 ± 0% 89.0 ± 0% +3.49% (p=0.000 n=250+250)
ClientRequestHeaders/_100_Headers-4 367 ± 0% 370 ± 0% +0.82% (p=0.000 n=250+250)
ClientRequestHeaders/1000_Headers-4 5.10k ± 0% 5.10k ± 0% +0.05% (p=0.000 n=250+246)
4 comments:
Patch Set #11, Line 177: // nil means "unlimited"
In the process of implementing this change, I realized why I went with zero == unlimited in the firs […]
Done
Patch Set #11, Line 1092: cc.writeStreamReset(cs.ID, ErrCodeInternal, err)
Will do. Thanks for catching that.
Done
Patch Set #12, Line 1090: cc.mu.Unlock()
Safe to release cc.mu here? I removed the defer because cc.forgetStreamID also attempts to acquire cc.mu (via cc.streamByID). It looks like holding cc.wmu is sufficient for the code that follows?
Patch Set #12, Line 1953: cc.peerMaxHeaderListSize = &s.Val
Is this safe? Or should I allocate a new(uint32) and copy the value?
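For reference, the copy-based alternative being asked about would be something like this (a sketch, not the patch itself):
case SettingMaxHeaderListSize:
	v := s.Val // copy, so the stored pointer doesn't alias the Setting passed to the callback
	cc.peerMaxHeaderListSize = &v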
Code basically looks good, just a couple nits and a question about the test for trailers.
Patch set 12: Run-TryBot +1
6 comments:
Patch Set #11, Line 177: // nil means "unlimited"
Done
Sorry for the churn, I changed my mind ... can you change this to uint64 to avoid the allocation? UINT64_MAX would mean "unlimited" (although you wouldn't have to say that in the comment, it would be implicit)
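A sketch of the uint64 variant being requested, with math.MaxUint64 assumed as the implicit "no limit" default (names follow the discussion above, not the final diff):
// field in ClientConn:
peerMaxHeaderListSize uint64 // initialized to math.MaxUint64 in newClientConn, updated from SETTINGS
// check in encodeHeaders, after summing hpack.HeaderField sizes into hlSize:
if hlSize > cc.peerMaxHeaderListSize {
	return nil, errRequestHeaderListSize
}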
Patch Set #12, Line 1090: cc.mu.Unlock()
Safe to release cc.mu here? I removed the defer because cc. […]
This looks fine to me. To double-check, can you run all tests with the race detector?
Patch Set #12, Line 1495: 16 * kb
nit: 16 << 10 instead of defining "kb" (16 << 10 is the idiomatic way to say "16 kb")
checkResponseOK := func(res *http.Response, desc string) {
if res == nil {
t.Errorf("%v: response nil; want non-nil.", desc)
return
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
t.Errorf("%v: response status = %v; want %v", desc, res.StatusCode, http.StatusOK)
}
body, err := ioutil.ReadAll(res.Body)
if err != nil {
t.Errorf("%v: error reading response body: %v", desc, err)
}
if string(body) != responseBody {
t.Errorf("%v: response body = %q; want %q", desc, body, responseBody)
}
}
Do we need to check all this? IIUC, we should only need to check the headers since that's what is encoded. And possibly the trailers.
Patch Set #12, Line 1603: // set for MaxHeaderBytes, above.
Why should it be "close to" instead of "exactly"?
Patch Set #12, Line 1631: checkRoundTrip(req, errRequestHeaderListSize, "Trailers over limit")
I'm surprised this works? I'd expect to get the error after reading some of the response body, not immediately after calling RoundTrip.
TryBots beginning. Status page: https://farmer.golang.org/try?commit=8b1e4d92
TryBots are happy.
Patch set 12: TryBot-Result +1
6 comments:
Patch Set #11, Line 177: // nil means "unlimited"
Sorry for the churn, I changed my mind ... can you change this to uint64 to avoid the allocation?
Will do.
Patch Set #12, Line 1090: cc.mu.Unlock()
This looks fine to me. To double-check, can you run all tests with the race detector?
I did. The race detector doesn't complain.
Patch Set #12, Line 1495: 16 * kb
nit: 16 << 10 instead of defining "kb" (16 << 10 is the idiomatic way to say "16 kb")
Ack.
checkResponseOK := func(res *http.Response, desc string) {
if res == nil {
t.Errorf("%v: response nil; want non-nil.", desc)
return
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
t.Errorf("%v: response status = %v; want %v", desc, res.StatusCode, http.StatusOK)
}
body, err := ioutil.ReadAll(res.Body)
if err != nil {
t.Errorf("%v: error reading response body: %v", desc, err)
}
if string(body) != responseBody {
t.Errorf("%v: response body = %q; want %q", desc, body, responseBody)
}
}
Do we need to check all this?
Probably not. I added this as a sanity check to make sure I was getting the response I expected and hadn't inadvertently broken something. Do you think it makes sense to at least verify that we get an http.StatusOK, or should I just remove all of it?
IIUC, we should only need to check the headers since that's what is encoded. And possibly the trailers.
You mean to verify that the headers that actually get encoded are a superset of req.Header? Would it make sense to do this check in the server handler, or do you prefer that I cut out the RoundTrip and just call encodeHeaders/encodeTrailers directly and verify the results? Or do you have something else in mind?
Patch Set #12, Line 1603: // set for MaxHeaderBytes, above.
Why should it be "close to" instead of "exactly"?
Because (*serverConn) maxHeaderListSize adds 320 bytes of padding to the value. I should probably add a comment to explain this.
func (sc *serverConn) maxHeaderListSize() uint32 {
n := sc.hs.MaxHeaderBytes
if n <= 0 {
n = http.DefaultMaxHeaderBytes
}
// http2's count is in a slightly different unit and includes 32 bytes per pair.
// So, take the net/http.Server value and pad it up a bit, assuming 10 headers.
const perFieldOverhead = 32 // per http2 spec
const typicalHeaders = 10 // conservative
return uint32(n + typicalHeaders*perFieldOverhead)
}
I could do an exact test for MaxHeaderBytes + 320, but I wanted to leave some leeway for the padding to change. This test is just a sanity check to make sure we get something close to the 16 KB we specified above, and not something outrageous like UINT32_MAX, since the tests that follow actually attempt to allocate peerSize bytes of headers/trailers.
I chose the +/- 1kb range arbitrarily. It could just as well be +/- 25% or anything else. I suppose it's unlikely that the server would ever request fewer than the configured MaxHeaderBytes, so the lower bound could also be MaxHeaderBytes.
Patch Set #12, Line 1631: checkRoundTrip(req, errRequestHeaderListSize, "Trailers over limit")
I'm surprised this works? I'd expect to get the error after reading some of the response body, not immediately after calling RoundTrip.
The server handler consumes the request body before writing any response, which forces the client to send the trailers. At least, that was the intention! Also, checkRoundTrip calls checkResponseOK, which consumes the server's response body.
For what it's worth, I ran this test with -count 40000 on my laptop and didn't get any failures. Doesn't mean it won't start flaking when run on a build server or trybot though :).
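For reference, the handler in the merged test below boils down to:

func(w http.ResponseWriter, r *http.Request) {
	// Consuming the request body before writing any response forces
	// the client to send its trailers.
	ioutil.ReadAll(r.Body)
	r.Body.Close()
}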
Looks good, just a few nits left.
3 comments:
checkResponseOK := func(res *http.Response, desc string) {
if res == nil {
t.Errorf("%v: response nil; want non-nil.", desc)
return
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
t.Errorf("%v: response status = %v; want %v", desc, res.StatusCode, http.StatusOK)
}
body, err := ioutil.ReadAll(res.Body)
if err != nil {
t.Errorf("%v: error reading response body: %v", desc, err)
}
if string(body) != responseBody {
t.Errorf("%v: response body = %q; want %q", desc, body, responseBody)
}
}
Do you think it makes sense to at least verify that we get an http.StatusOK, or should I just remove all of it?
Checking StatusOK seems fine.
You mean to verify that the headers that actually get encoded are a superset of req.Header? Would it make sense to do this check in the server handler, or do you prefer that I cut out the RoundTrip and just call encodeHeaders/encodeTrailers directly and verify the results? Or do you have something else in mind?
I'm now second-guessing what I wrote :) We already have tests that verify the headers and trailers are returned as expected. IMO it's enough to verify that the request completes without an error (unless the headers or trailers are too big, in which case the request should fail).
Patch Set #12, Line 1603: // set for MaxHeaderBytes, above.
> Why should it be "close to" instead of "exactly"? […]
IMO it's fine to check MaxHeaderBytes + 320 exactly. I expect that constant will change very rarely, if ever.
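That is what the merged test below settles on:

// Sanity check peerSize. (*serverConn) maxHeaderListSize adds
// 320 bytes of padding.
wantHeaderBytes := uint64(st.ts.Config.MaxHeaderBytes) + 320
if peerSize != wantHeaderBytes {
	t.Errorf("peerMaxHeaderListSize = %v; want %v.", peerSize, wantHeaderBytes)
}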
Patch Set #12, Line 1631: checkRoundTrip(req, errRequestHeaderListSize, "Trailers over limit")
> I'm surprised this works? I'd expect to get the error after reading some of the response body, not […]
Ah, thanks. I got mixed up and forgot this change added MAX_HEADER_LIST_SIZE to the *client*, not the server. I also missed the comment at line 1488.
Mike Appleby uploaded patch set #13 to this change.
http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn
Add a new peerMaxHeaderListSize member to ClientConn which records the
SETTINGS_MAX_HEADER_LIST_SIZE requested by the client's peer, and
respect this limit in (*ClientConn) encodeHeaders / encodeTrailers.
Attempting to send more than peerMaxHeaderListSize bytes of headers or
trailers will result in RoundTrip returning errRequestHeaderListSize.
Updates golang/go#13959
Change-Id: Ic707179782acdf8ae543429ea1af7f4f30e67e59
---
M http2/transport.go
M http2/transport_test.go
2 files changed, 434 insertions(+), 73 deletions(-)
Benchsave results are here:
https://perf.golang.org/search?q=upload:20170825.2
My laptop is old, and it turns out my processor is overheating and getting throttled when running the benchmarks, which probably explains the unusually high variation in CPU stats.
$ benchsave http2-tip-57efc9c.txt http2-cl-29243-patch-set-13.txt
name old time/op new time/op delta
ServerGets-4 159µs ± 1% 158µs ± 1% -0.53% (p=0.000 n=234+228)
ServerPosts-4 223µs ±68% 239µs ±56% +7.30% (p=0.000 n=250+250)
ServerToClientStreamDefaultOptions-4 40.8µs ±58% 41.3µs ±56% ~ (p=0.067 n=250+250)
ServerToClientStreamReuseFrames-4 41.8µs ±53% 42.3µs ±53% ~ (p=0.168 n=250+250)
Server_GetRequest-4 224µs ±53% 225µs ±52% +0.45% (p=0.000 n=250+250)
Server_PostRequest-4 262µs ±42% 262µs ±40% ~ (p=0.687 n=250+250)
ClientRequestHeaders/___0_Headers-4 190µs ±57% 191µs ±58% +0.27% (p=0.000 n=250+250)
ClientRequestHeaders/__10_Headers-4 213µs ±59% 221µs ±58% +3.48% (p=0.000 n=250+250)
ClientRequestHeaders/_100_Headers-4 392µs ±52% 388µs ±16% -0.94% (p=0.000 n=250+250)
ClientRequestHeaders/1000_Headers-4 4.37ms ±17% 4.61ms ±16% +5.57% (p=0.000 n=250+250)
name old alloc/op new alloc/op delta
ServerGets-4 4.50kB ± 0% 4.50kB ± 0% +0.07% (p=0.000 n=245+215)
ServerPosts-4 5.45kB ± 0% 5.56kB ± 4% +2.03% (p=0.000 n=188+250)
ServerToClientStreamDefaultOptions-4 908B ±11% 907B ±11% ~ (p=0.394 n=250+250)
ServerToClientStreamReuseFrames-4 862B ±11% 864B ±11% ~ (p=0.984 n=250+250)
Server_GetRequest-4 4.55kB ± 1% 4.55kB ± 1% ~ (p=0.752 n=250+250)
Server_PostRequest-4 5.58kB ± 4% 5.58kB ± 3% ~ (p=0.354 n=250+250)
ClientRequestHeaders/___0_Headers-4 4.57kB ± 0% 4.62kB ± 0% +0.94% (p=0.000 n=250+250)
ClientRequestHeaders/__10_Headers-4 6.30kB ± 0% 6.34kB ± 0% +0.55% (p=0.000 n=250+250)
ClientRequestHeaders/_100_Headers-4 30.3kB ± 0% 30.4kB ± 0% +0.10% (p=0.000 n=250+250)
ClientRequestHeaders/1000_Headers-4 343kB ± 0% 343kB ± 0% +0.02% (p=0.002 n=250+250)
name old allocs/op new allocs/op delta
ServerGets-4 66.0 ± 0% 66.0 ± 0% ~ (all equal)
ServerPosts-4 79.0 ± 0% 79.0 ± 0% ~ (all equal)
ServerToClientStreamDefaultOptions-4 17.0 ± 0% 17.0 ± 0% ~ (all equal)
ServerToClientStreamReuseFrames-4 16.0 ± 0% 16.0 ± 0% ~ (all equal)
Server_GetRequest-4 62.0 ± 0% 62.0 ± 0% ~ (all equal)
Server_PostRequest-4 75.0 ± 0% 75.0 ± 0% ~ (all equal)
ClientRequestHeaders/___0_Headers-4 54.0 ± 0% 57.0 ± 0% +5.56% (p=0.000 n=250+250)
ClientRequestHeaders/__10_Headers-4 86.0 ± 0% 89.0 ± 0% +3.49% (p=0.000 n=250+250)
ClientRequestHeaders/_100_Headers-4 367 ± 0% 370 ± 0% +0.82% (p=0.000 n=250+250)
ClientRequestHeaders/1000_Headers-4 5.09k ± 0% 5.09k ± 0% +0.06% (p=0.000 n=250+250)
3 comments:
Sorry for the churn, I changed my mind ... can you change this to uint64 to avoid the allocation?
Done
tr := &Transport{TLSClientConfig: tlsConfigInsecure}
defer tr.CloseIdleConnections()
checkRoundTrip := func(req *http.Request, wantErr error, desc string) {
res, err := tr.RoundTrip(req)
if err != wantErr {
if res != nil {
res.Body.Close()
}
t.Errorf("%v: RoundTrip err = %v; want %v", desc, err, wantErr)
return
}
if err == nil {
if res == nil {
t.Errorf("%v: response nil; want non-nil.", desc)
> Do you think it makes sense to at least verify that we get an http. […]
Done
Patch Set #12, Line 1603: filler := strings.Repeat("*", 1024)
IMO it's fine to check MaxHeaderBytes + 320 exactly. I expect that constant will change very rarely, if ever.
Done
Sorry, one more nit. Thanks for your patience!
Patch set 13:Run-TryBot +1
2 comments:
Patch Set #13, Line 1260: 0xffffffffffffffff
nit: either make this a local constant to prevent typos, or remove this check. I'd remove this check -- in practice, all clients I know of will set this value during the initial SETTINGS, so I expect this optimization ~never runs in practice.
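(For completeness, the local-constant variant would look roughly like the sketch below; the constant name is illustrative, and it is not what was merged, since the final patch simply does the size pre-pass unconditionally:)

const unlimitedHeaderListSize = 0xffffffffffffffff // 1<<64 - 1, the default set in newClientConn
if cc.peerMaxHeaderListSize != unlimitedHeaderListSize {
	// ... do the header-list-size pre-pass only when the peer advertised a limit ...
}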
Patch Set #13, Line 1310: peerMaxHeaderListSize
ditto
TryBots beginning. Status page: https://farmer.golang.org/try?commit=05c0dead
TryBots are happy.
Patch set 13:TryBot-Result +1
1 comment:
Patch Set #13, Line 1260: 0xffffffffffffffff
nit: either make this a local constant to prevent typos, or remove this check. I'd remove this check -- in practice, all clients I know of will set this value during the initial SETTINGS, so I expect this optimization ~never runs in practice.
In that case, does it make sense to make peerMaxHeaderListSize a uint32 and use UINT32_MAX as the default, given that we expect this value to be "unlimited" ~never and we expect clients to send more than UINT32_MAX bytes of headers ~never?
Patch Set 13:
1 comment:
Patch Set #13, Line 1260: 0xffffffffffffffff
> nit: either make this a local constant to prevent typos, or remove this check. […]
Well, technically that's not infinite. (uint64max is infinite in practice, since it's the maximum amount of addressable memory.) I'd just stick with uint64.
Mike Appleby uploaded patch set #14 to this change.
http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn
Add a new peerMaxHeaderListSize member to ClientConn which records the
SETTINGS_MAX_HEADER_LIST_SIZE requested by the client's peer, and
respect this limit in (*ClientConn) encodeHeaders / encodeTrailers.
Attempting to send more than peerMaxHeaderListSize bytes of headers or
trailers will result in RoundTrip returning errRequestHeaderListSize.
Updates golang/go#13959
Change-Id: Ic707179782acdf8ae543429ea1af7f4f30e67e59
---
M http2/transport.go
M http2/transport_test.go
2 files changed, 431 insertions(+), 74 deletions(-)
Patch set 14:Run-TryBot +1
TryBots beginning. Status page: https://farmer.golang.org/try?commit=a8876f4b
2 comments:
Patch Set #13, Line 1260: 0xffffffffffffffff
Well, technically that's not infinite. […]
Done
Patch Set #13, Line 1310: peerMaxHeaderListSize
ditto
Done
TryBots are happy.
Patch set 14:TryBot-Result +1
Patch set 14:Code-Review +2
Tom Bergan merged this change.
http2: Respect peer's SETTINGS_MAX_HEADER_LIST_SIZE in ClientConn
Add a new peerMaxHeaderListSize member to ClientConn which records the
SETTINGS_MAX_HEADER_LIST_SIZE requested by the client's peer, and
respect this limit in (*ClientConn) encodeHeaders / encodeTrailers.
Attempting to send more than peerMaxHeaderListSize bytes of headers or
trailers will result in RoundTrip returning errRequestHeaderListSize.
Updates golang/go#13959
Change-Id: Ic707179782acdf8ae543429ea1af7f4f30e67e59
Reviewed-on: https://go-review.googlesource.com/29243
Run-TryBot: Tom Bergan <tomb...@google.com>
TryBot-Result: Gobot Gobot <go...@golang.org>
Reviewed-by: Tom Bergan <tomb...@google.com>
---
M http2/transport.go
M http2/transport_test.go
2 files changed, 431 insertions(+), 74 deletions(-)
diff --git a/http2/transport.go b/http2/transport.go
index 867e178..adb77ff 100644
--- a/http2/transport.go
+++ b/http2/transport.go
@@ -87,7 +87,7 @@
// MaxHeaderListSize is the http2 SETTINGS_MAX_HEADER_LIST_SIZE to
// send in the initial settings frame. It is how many bytes
- // of response headers are allow. Unlike the http2 spec, zero here
+ // of response headers are allowed. Unlike the http2 spec, zero here
// means to use a default limit (currently 10MB). If you actually
// want to advertise an unlimited value to the peer, Transport
// interprets the highest possible value here (0xffffffff or 1<<32-1)
@@ -172,9 +172,10 @@
fr *Framer
lastActive time.Time
// Settings from peer: (also guarded by mu)
- maxFrameSize uint32
- maxConcurrentStreams uint32
- initialWindowSize uint32
+ maxFrameSize uint32
+ maxConcurrentStreams uint32
+ peerMaxHeaderListSize uint64
+ initialWindowSize uint32
hbuf bytes.Buffer // HPACK encoder writes into this
henc *hpack.Encoder
@@ -519,17 +520,18 @@
func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, error) {
cc := &ClientConn{
- t: t,
- tconn: c,
- readerDone: make(chan struct{}),
- nextStreamID: 1,
- maxFrameSize: 16 << 10, // spec default
- initialWindowSize: 65535, // spec default
- maxConcurrentStreams: 1000, // "infinite", per spec. 1000 seems good enough.
- streams: make(map[uint32]*clientStream),
- singleUse: singleUse,
- wantSettingsAck: true,
- pings: make(map[[8]byte]chan struct{}),
+ t: t,
+ tconn: c,
+ readerDone: make(chan struct{}),
+ nextStreamID: 1,
+ maxFrameSize: 16 << 10, // spec default
+ initialWindowSize: 65535, // spec default
+ maxConcurrentStreams: 1000, // "infinite", per spec. 1000 seems good enough.
+ peerMaxHeaderListSize: 0xffffffffffffffff, // "infinite", per spec. Use 2^64-1 instead.
+ streams: make(map[uint32]*clientStream),
+ singleUse: singleUse,
+ wantSettingsAck: true,
+ pings: make(map[[8]byte]chan struct{}),
}
if d := t.idleConnTimeout(); d != 0 {
cc.idleTimeout = d
@@ -1085,8 +1087,13 @@
var trls []byte
if hasTrailers {
cc.mu.Lock()
- defer cc.mu.Unlock()
- trls = cc.encodeTrailers(req)
+ trls, err = cc.encodeTrailers(req)
+ cc.mu.Unlock()
+ if err != nil {
+ cc.writeStreamReset(cs.ID, ErrCodeInternal, err)
+ cc.forgetStreamID(cs.ID)
+ return err
+ }
}
cc.wmu.Lock()
@@ -1189,62 +1196,86 @@
}
}
- // 8.1.2.3 Request Pseudo-Header Fields
- // The :path pseudo-header field includes the path and query parts of the
- // target URI (the path-absolute production and optionally a '?' character
- // followed by the query production (see Sections 3.3 and 3.4 of
- // [RFC3986]).
- cc.writeHeader(":authority", host)
- cc.writeHeader(":method", req.Method)
- if req.Method != "CONNECT" {
- cc.writeHeader(":path", path)
- cc.writeHeader(":scheme", req.URL.Scheme)
- }
- if trailers != "" {
- cc.writeHeader("trailer", trailers)
+ enumerateHeaders := func(f func(name, value string)) {
+ // 8.1.2.3 Request Pseudo-Header Fields
+ // The :path pseudo-header field includes the path and query parts of the
+ // target URI (the path-absolute production and optionally a '?' character
+ // followed by the query production (see Sections 3.3 and 3.4 of
+ // [RFC3986]).
+ f(":authority", host)
+ f(":method", req.Method)
+ if req.Method != "CONNECT" {
+ f(":path", path)
+ f(":scheme", req.URL.Scheme)
+ }
+ if trailers != "" {
+ f("trailer", trailers)
+ }
+
+ var didUA bool
+ for k, vv := range req.Header {
+ if strings.EqualFold(k, "host") || strings.EqualFold(k, "content-length") {
+ // Host is :authority, already sent.
+ // Content-Length is automatic, set below.
+ continue
+ } else if strings.EqualFold(k, "connection") || strings.EqualFold(k, "proxy-connection") ||
+ strings.EqualFold(k, "transfer-encoding") || strings.EqualFold(k, "upgrade") ||
+ strings.EqualFold(k, "keep-alive") {
+ // Per 8.1.2.2 Connection-Specific Header
+ // Fields, don't send connection-specific
+ // fields. We have already checked if any
+ // are error-worthy so just ignore the rest.
+ continue
+ } else if strings.EqualFold(k, "user-agent") {
+ // Match Go's http1 behavior: at most one
+ // User-Agent. If set to nil or empty string,
+ // then omit it. Otherwise if not mentioned,
+ // include the default (below).
+ didUA = true
+ if len(vv) < 1 {
+ continue
+ }
+ vv = vv[:1]
+ if vv[0] == "" {
+ continue
+ }
+
+ }
+
+ for _, v := range vv {
+ f(k, v)
+ }
+ }
+ if shouldSendReqContentLength(req.Method, contentLength) {
+ f("content-length", strconv.FormatInt(contentLength, 10))
+ }
+ if addGzipHeader {
+ f("accept-encoding", "gzip")
+ }
+ if !didUA {
+ f("user-agent", defaultUserAgent)
+ }
}
- var didUA bool
- for k, vv := range req.Header {
- lowKey := strings.ToLower(k)
- switch lowKey {
- case "host", "content-length":
- // Host is :authority, already sent.
- // Content-Length is automatic, set below.
- continue
- case "connection", "proxy-connection", "transfer-encoding", "upgrade", "keep-alive":
- // Per 8.1.2.2 Connection-Specific Header
- // Fields, don't send connection-specific
- // fields. We have already checked if any
- // are error-worthy so just ignore the rest.
- continue
- case "user-agent":
- // Match Go's http1 behavior: at most one
- // User-Agent. If set to nil or empty string,
- // then omit it. Otherwise if not mentioned,
- // include the default (below).
- didUA = true
- if len(vv) < 1 {
- continue
- }
- vv = vv[:1]
- if vv[0] == "" {
- continue
- }
- }
- for _, v := range vv {
- cc.writeHeader(lowKey, v)
- }
+ // Do a first pass over the headers counting bytes to ensure
+ // we don't exceed cc.peerMaxHeaderListSize. This is done as a
+ // separate pass before encoding the headers to prevent
+ // modifying the hpack state.
+ hlSize := uint64(0)
+ enumerateHeaders(func(name, value string) {
+ hf := hpack.HeaderField{Name: name, Value: value}
+ hlSize += uint64(hf.Size())
+ })
+
+ if hlSize > cc.peerMaxHeaderListSize {
+ return nil, errRequestHeaderListSize
}
- if shouldSendReqContentLength(req.Method, contentLength) {
- cc.writeHeader("content-length", strconv.FormatInt(contentLength, 10))
- }
- if addGzipHeader {
- cc.writeHeader("accept-encoding", "gzip")
- }
- if !didUA {
- cc.writeHeader("user-agent", defaultUserAgent)
- }
+
+ // Header list size is ok. Write the headers.
+ enumerateHeaders(func(name, value string) {
+ cc.writeHeader(strings.ToLower(name), value)
+ })
+
return cc.hbuf.Bytes(), nil
}
@@ -1271,17 +1302,29 @@
}
// requires cc.mu be held.
-func (cc *ClientConn) encodeTrailers(req *http.Request) []byte {
+func (cc *ClientConn) encodeTrailers(req *http.Request) ([]byte, error) {
cc.hbuf.Reset()
+
+ hlSize := uint64(0)
for k, vv := range req.Trailer {
- // Transfer-Encoding, etc.. have already been filter at the
+ for _, v := range vv {
+ hf := hpack.HeaderField{Name: k, Value: v}
+ hlSize += uint64(hf.Size())
+ }
+ }
+ if hlSize > cc.peerMaxHeaderListSize {
+ return nil, errRequestHeaderListSize
+ }
+
+ for k, vv := range req.Trailer {
+ // Transfer-Encoding, etc.. have already been filtered at the
// start of RoundTrip
lowKey := strings.ToLower(k)
for _, v := range vv {
cc.writeHeader(lowKey, v)
}
}
- return cc.hbuf.Bytes()
+ return cc.hbuf.Bytes(), nil
}
func (cc *ClientConn) writeHeader(name, value string) {
@@ -1911,6 +1954,8 @@
cc.maxFrameSize = s.Val
case SettingMaxConcurrentStreams:
cc.maxConcurrentStreams = s.Val
+ case SettingMaxHeaderListSize:
+ cc.peerMaxHeaderListSize = uint64(s.Val)
case SettingInitialWindowSize:
// Values above the maximum flow-control
// window size of 2^31-1 MUST be treated as a
@@ -2077,6 +2122,7 @@
var (
errResponseHeaderListSize = errors.New("http2: response header list larger than advertised limit")
+ errRequestHeaderListSize = errors.New("http2: request header list larger than peer's advertised limit")
errPseudoTrailers = errors.New("http2: invalid pseudo header in trailers")
)
diff --git a/http2/transport_test.go b/http2/transport_test.go
index 2e727a8..0126ff4 100644
--- a/http2/transport_test.go
+++ b/http2/transport_test.go
@@ -1371,6 +1371,269 @@
ct.run()
}
+// headerListSize returns the HTTP2 header list size of h.
+// http://httpwg.org/specs/rfc7540.html#SETTINGS_MAX_HEADER_LIST_SIZE
+// http://httpwg.org/specs/rfc7540.html#MaxHeaderBlock
+func headerListSize(h http.Header) (size uint32) {
+ for k, vv := range h {
+ for _, v := range vv {
+ hf := hpack.HeaderField{Name: k, Value: v}
+ size += hf.Size()
+ }
+ }
+ return size
+}
+
+// padHeaders adds data to an http.Header until headerListSize(h) ==
+// limit. Due to the way header list sizes are calculated, padHeaders
+// cannot add fewer than len("Pad-Headers") + 32 bytes to h, and will
+// call t.Fatal if asked to do so. PadHeaders first reserves enough
+// space for an empty "Pad-Headers" key, then adds as many copies of
+// filler as possible. Any remaining bytes necessary to push the
+// header list size up to limit are added to h["Pad-Headers"].
+func padHeaders(t *testing.T, h http.Header, limit uint64, filler string) {
+ if limit > 0xffffffff {
+ t.Fatalf("padHeaders: refusing to pad to more than 2^32-1 bytes. limit = %v", limit)
+ }
+ hf := hpack.HeaderField{Name: "Pad-Headers", Value: ""}
+ minPadding := uint64(hf.Size())
+ size := uint64(headerListSize(h))
+
+ minlimit := size + minPadding
+ if limit < minlimit {
+ t.Fatalf("padHeaders: limit %v < %v", limit, minlimit)
+ }
+
+ // Use a fixed-width format for name so that fieldSize
+ // remains constant.
+ nameFmt := "Pad-Headers-%06d"
+ hf = hpack.HeaderField{Name: fmt.Sprintf(nameFmt, 1), Value: filler}
+ fieldSize := uint64(hf.Size())
+
+ // Add as many complete filler values as possible, leaving
+ // room for at least one empty "Pad-Headers" key.
+ limit = limit - minPadding
+ for i := 0; size+fieldSize < limit; i++ {
+ name := fmt.Sprintf(nameFmt, i)
+ h.Add(name, filler)
+ size += fieldSize
+ }
+
+ // Add enough bytes to reach limit.
+ remain := limit - size
+ lastValue := strings.Repeat("*", int(remain))
+ h.Add("Pad-Headers", lastValue)
+}
+
+func TestPadHeaders(t *testing.T) {
+ check := func(h http.Header, limit uint32, fillerLen int) {
+ if h == nil {
+ h = make(http.Header)
+ }
+ filler := strings.Repeat("f", fillerLen)
+ padHeaders(t, h, uint64(limit), filler)
+ gotSize := headerListSize(h)
+ if gotSize != limit {
+ t.Errorf("Got size = %v; want %v", gotSize, limit)
+ }
+ }
+ // Try all possible combinations for small fillerLen and limit.
+ hf := hpack.HeaderField{Name: "Pad-Headers", Value: ""}
+ minLimit := hf.Size()
+ for limit := minLimit; limit <= 128; limit++ {
+ for fillerLen := 0; uint32(fillerLen) <= limit; fillerLen++ {
+ check(nil, limit, fillerLen)
+ }
+ }
+
+ // Try a few tests with larger limits, plus cumulative
+ // tests. Since these tests are cumulative, tests[i+1].limit
+ // must be >= tests[i].limit + minLimit. See the comment on
+ // padHeaders for more info on why the limit arg has this
+ // restriction.
+ tests := []struct {
+ fillerLen int
+ limit uint32
+ }{
+ {
+ fillerLen: 64,
+ limit: 1024,
+ },
+ {
+ fillerLen: 1024,
+ limit: 1286,
+ },
+ {
+ fillerLen: 256,
+ limit: 2048,
+ },
+ {
+ fillerLen: 1024,
+ limit: 10 * 1024,
+ },
+ {
+ fillerLen: 1023,
+ limit: 11 * 1024,
+ },
+ }
+ h := make(http.Header)
+ for _, tc := range tests {
+ check(nil, tc.limit, tc.fillerLen)
+ check(h, tc.limit, tc.fillerLen)
+ }
+}
+
+func TestTransportChecksRequestHeaderListSize(t *testing.T) {
+ st := newServerTester(t,
+ func(w http.ResponseWriter, r *http.Request) {
+ // Consume body & force client to send
+ // trailers before writing response.
+ // ioutil.ReadAll returns non-nil err for
+ // requests that attempt to send greater than
+ // maxHeaderListSize bytes of trailers, since
+ // those requests generate a stream reset.
+ ioutil.ReadAll(r.Body)
+ r.Body.Close()
+ },
+ func(ts *httptest.Server) {
+ ts.Config.MaxHeaderBytes = 16 << 10
+ },
+ optOnlyServer,
+ optQuiet,
+ )
+ defer st.Close()
+
+ tr := &Transport{TLSClientConfig: tlsConfigInsecure}
+ defer tr.CloseIdleConnections()
+
+ checkRoundTrip := func(req *http.Request, wantErr error, desc string) {
+ res, err := tr.RoundTrip(req)
+ if err != wantErr {
+ if res != nil {
+ res.Body.Close()
+ }
+ t.Errorf("%v: RoundTrip err = %v; want %v", desc, err, wantErr)
+ return
+ }
+ if err == nil {
+ if res == nil {
+ t.Errorf("%v: response nil; want non-nil.", desc)
+ return
+ }
+ defer res.Body.Close()
+ if res.StatusCode != http.StatusOK {
+ t.Errorf("%v: response status = %v; want %v", desc, res.StatusCode, http.StatusOK)
+ }
+ return
+ }
+ if res != nil {
+ t.Errorf("%v: RoundTrip err = %v but response non-nil", desc, err)
+ }
+ }
+ headerListSizeForRequest := func(req *http.Request) (size uint64) {
+ contentLen := actualContentLength(req)
+ trailers, err := commaSeparatedTrailers(req)
+ if err != nil {
+ t.Fatalf("headerListSizeForRequest: %v", err)
+ }
+ cc := &ClientConn{peerMaxHeaderListSize: 0xffffffffffffffff}
+ cc.henc = hpack.NewEncoder(&cc.hbuf)
+ cc.mu.Lock()
+ hdrs, err := cc.encodeHeaders(req, true, trailers, contentLen)
+ cc.mu.Unlock()
+ if err != nil {
+ t.Fatalf("headerListSizeForRequest: %v", err)
+ }
+ hpackDec := hpack.NewDecoder(initialHeaderTableSize, func(hf hpack.HeaderField) {
+ size += uint64(hf.Size())
+ })
+ if len(hdrs) > 0 {
+ if _, err := hpackDec.Write(hdrs); err != nil {
+ t.Fatalf("headerListSizeForRequest: %v", err)
+ }
+ }
+ return size
+ }
+ // Create a new Request for each test, rather than reusing the
+ // same Request, to avoid a race when modifying req.Headers.
+ // See https://github.com/golang/go/issues/21316
+ newRequest := func() *http.Request {
+ // Body must be non-nil to enable writing trailers.
+ body := strings.NewReader("hello")
+ req, err := http.NewRequest("POST", st.ts.URL, body)
+ if err != nil {
+ t.Fatalf("newRequest: NewRequest: %v", err)
+ }
+ return req
+ }
+
+ // Make an arbitrary request to ensure we get the server's
+ // settings frame and initialize peerMaxHeaderListSize.
+ req := newRequest()
+ checkRoundTrip(req, nil, "Initial request")
+
+ // Get the ClientConn associated with the request and validate
+ // peerMaxHeaderListSize.
+ addr := authorityAddr(req.URL.Scheme, req.URL.Host)
+ cc, err := tr.connPool().GetClientConn(req, addr)
+ if err != nil {
+ t.Fatalf("GetClientConn: %v", err)
+ }
+ cc.mu.Lock()
+ peerSize := cc.peerMaxHeaderListSize
+ cc.mu.Unlock()
+ st.scMu.Lock()
+ wantSize := uint64(st.sc.maxHeaderListSize())
+ st.scMu.Unlock()
+ if peerSize != wantSize {
+ t.Errorf("peerMaxHeaderListSize = %v; want %v", peerSize, wantSize)
+ }
+
+ // Sanity check peerSize. (*serverConn) maxHeaderListSize adds
+ // 320 bytes of padding.
+ wantHeaderBytes := uint64(st.ts.Config.MaxHeaderBytes) + 320
+ if peerSize != wantHeaderBytes {
+ t.Errorf("peerMaxHeaderListSize = %v; want %v.", peerSize, wantHeaderBytes)
+ }
+
+ // Pad headers & trailers, but stay under peerSize.
+ req = newRequest()
+ req.Header = make(http.Header)
+ req.Trailer = make(http.Header)
+ filler := strings.Repeat("*", 1024)
+ padHeaders(t, req.Trailer, peerSize, filler)
+ // cc.encodeHeaders adds some default headers to the request,
+ // so we need to leave room for those.
+ defaultBytes := headerListSizeForRequest(req)
+ padHeaders(t, req.Header, peerSize-defaultBytes, filler)
+ checkRoundTrip(req, nil, "Headers & Trailers under limit")
+
+ // Add enough header bytes to push us over peerSize.
+ req = newRequest()
+ req.Header = make(http.Header)
+ padHeaders(t, req.Header, peerSize, filler)
+ checkRoundTrip(req, errRequestHeaderListSize, "Headers over limit")
+
+ // Push trailers over the limit.
+ req = newRequest()
+ req.Trailer = make(http.Header)
+ padHeaders(t, req.Trailer, peerSize+1, filler)
+ checkRoundTrip(req, errRequestHeaderListSize, "Trailers over limit")
+
+ // Send headers with a single large value.
+ req = newRequest()
+ filler = strings.Repeat("*", int(peerSize))
+ req.Header = make(http.Header)
+ req.Header.Set("Big", filler)
+ checkRoundTrip(req, errRequestHeaderListSize, "Single large header")
+
+ // Send trailers with a single large value.
+ req = newRequest()
+ req.Trailer = make(http.Header)
+ req.Trailer.Set("Big", filler)
+ checkRoundTrip(req, errRequestHeaderListSize, "Single large trailer")
+}
+
func TestTransportChecksResponseHeaderListSize(t *testing.T) {
ct := newClientTester(t)
ct.client = func() error {
@@ -2663,7 +2926,7 @@
},
}
for i, tt := range tests {
- cc := &ClientConn{}
+ cc := &ClientConn{peerMaxHeaderListSize: 0xffffffffffffffff}
cc.henc = hpack.NewEncoder(&cc.hbuf)
cc.mu.Lock()
hdrs, err := cc.encodeHeaders(tt.req, false, "", -1)
@@ -3373,3 +3636,51 @@
}
ct.run()
}
+
+func benchSimpleRoundTrip(b *testing.B, nHeaders int) {
+ defer disableGoroutineTracking()()
+ b.ReportAllocs()
+ st := newServerTester(b,
+ func(w http.ResponseWriter, r *http.Request) {
+ },
+ optOnlyServer,
+ optQuiet,
+ )
+ defer st.Close()
+
+ tr := &Transport{TLSClientConfig: tlsConfigInsecure}
+ defer tr.CloseIdleConnections()
+
+ req, err := http.NewRequest("GET", st.ts.URL, nil)
+ if err != nil {
+ b.Fatal(err)
+ }
+
+ for i := 0; i < nHeaders; i++ {
+ name := fmt.Sprint("A-", i)
+ req.Header.Set(name, "*")
+ }
+
+ b.ResetTimer()
+
+ for i := 0; i < b.N; i++ {
+ res, err := tr.RoundTrip(req)
+ if err != nil {
+ if res != nil {
+ res.Body.Close()
+ }
+ b.Fatalf("RoundTrip err = %v; want nil", err)
+ }
+ res.Body.Close()
+ if res.StatusCode != http.StatusOK {
+ b.Fatalf("Response code = %v; want %v", res.StatusCode, http.StatusOK)
+ }
+ }
+}
+
+func BenchmarkClientRequestHeaders(b *testing.B) {
+ b.Run(" 0 Headers", func(b *testing.B) { benchSimpleRoundTrip(b, 0) })
+ b.Run(" 10 Headers", func(b *testing.B) { benchSimpleRoundTrip(b, 10) })
+ b.Run(" 100 Headers", func(b *testing.B) { benchSimpleRoundTrip(b, 100) })
+ b.Run("1000 Headers", func(b *testing.B) { benchSimpleRoundTrip(b, 1000) })
+}