Re: [kubernetes/kubernetes] ServerTimeout when there are a huge amount of requests on a specific resource (#45811)


Timothy St. Clair

May 15, 2017, 9:40:03 AM

/cc @kubernetes/sig-api-machinery-bugs



Wojciech Tyczynski

May 15, 2017, 11:07:14 AM

I don't think this is necessarily a bug - this may be working as intended.

If you have 500 pods, each of them generating 10 qps of "get configmap" requests, that gives us 5000 qps in total. Depending on the size of the master machine and the number of apiservers and/or etcd instances, that may simply be too many requests to handle.
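
For a sense of what that load looks like, here is a rough reproduction sketch (a sketch only, not the reporter's setup: it runs 500 workers from one process rather than 500 pods). It assumes a recent client-go where Get takes a context; the kubeconfig path, namespace, and ConfigMap name are placeholders.

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; point this at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Raise client-side rate limits so client-go itself does not throttle
	// the test below the target ~5000 qps.
	cfg.QPS = 6000
	cfg.Burst = 6000

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// 500 workers, each issuing ~10 GETs per second against the same ConfigMap.
	for i := 0; i < 500; i++ {
		go func() {
			ticker := time.NewTicker(100 * time.Millisecond)
			defer ticker.Stop()
			for range ticker.C {
				_, _ = client.CoreV1().ConfigMaps("default").
					Get(context.TODO(), "demo-config", metav1.GetOptions{})
			}
		}()
	}
	select {} // run until interrupted
}
```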

zhouhaibing089

May 15, 2017, 11:09:54 AM

@wojtek-t That is not actually the case. Let me emphasize two points:

  1. Only one resource is affected.
  2. The timeouts did not stop after the test finished.

Daniel Smith

May 26, 2017, 6:08:01 PM

Please look at and/or paste the apiserver logs here.

keyingliu

Jun 27, 2017, 8:39:12 AM

We updated the grpc package to the same version that Kubernetes 1.7 uses, and the hang is gone.

Armstrong Li

Jun 29, 2017, 9:21:43 AM

@zhouhaibing089 @keyingliu
There is a deadlock bug in grpc 1.0.0: grpc/grpc-java#2258
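
One common way such a hang shows up (an illustrative sketch only, not the actual grpc code): a concurrency quota whose tokens are never returned on an error path, so every later caller blocks forever while acquiring.

```go
package main

import (
	"errors"
	"fmt"
)

// quota hands out at most cap(tokens) concurrent "streams".
type quota struct{ tokens chan struct{} }

func newQuota(n int) *quota {
	q := &quota{tokens: make(chan struct{}, n)}
	for i := 0; i < n; i++ {
		q.tokens <- struct{}{}
	}
	return q
}

func (q *quota) acquire() { <-q.tokens }             // blocks when no token is left
func (q *quota) release() { q.tokens <- struct{}{} } // returns a token

// call simulates opening a stream; on the error path it forgets to release.
func call(q *quota, fail bool) error {
	q.acquire()
	if fail {
		// BUG: missing q.release(), so the token leaks.
		return errors.New("transport error")
	}
	defer q.release()
	return nil
}

func main() {
	q := newQuota(2)
	_ = call(q, true) // leaks one token
	_ = call(q, true) // leaks the other token
	fmt.Println("every later call would now block forever in acquire()")
	// call(q, false) // uncommenting this line deadlocks the program
}
```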

Matt Liggett

Aug 21, 2017, 4:57:35 PM

/assign

fejta-bot

Jan 3, 2018, 6:22:32 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

zhouhaibing089

Feb 5, 2018, 2:45:15 AM

/remove-lifecycle stale

zhouhaibing089

Feb 5, 2018, 2:47:09 AM

Below is the goroutine profile based on k8s 1.6.3.

goroutine profile: total 49531
20276 @ 0x42ce8a 0x43c5d5 0x43b36c 0x294b926 0x293a799 0x25b3f05 0x25b4a12 0x25cd6e6 0x25b42fd 0x255c692 0x2233292 0x222c55f 0x222c3b7 0x2224388 0x221d67e 0x221cf5d 0x221ca62 0x220bf92 0xf7d136 0xf8ac74 0x225a357 0x2259f70 0x223abbc 0xfc4bf0 0xfd3048 0x603bc4 0x6050bf 0x102a205 0x603bc4 0x102c10d 0x603bc4 0x1029792
#	0x294b925	k8s.io/kubernetes/vendor/google.golang.org/grpc/transport.wait+0x445										/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/transport/transport.go:577
#	0x293a798	k8s.io/kubernetes/vendor/google.golang.org/grpc/transport.(*http2Client).NewStream+0x658							/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/transport/http2_client.go:319
#	0x25b3f04	k8s.io/kubernetes/vendor/google.golang.org/grpc.sendRequest+0x94										/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/call.go:80
#	0x25b4a11	k8s.io/kubernetes/vendor/google.golang.org/grpc.invoke+0x621											/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/call.go:191
#	0x25cd6e5	k8s.io/kubernetes/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.(*ClientMetrics).UnaryClientInterceptor.func1+0x125			/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/client_metrics.go:84
#	0x25b42fc	k8s.io/kubernetes/vendor/google.golang.org/grpc.Invoke+0xdc											/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/call.go:116
#	0x255c691	k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/etcdserverpb.(*kVClient).Range+0xd1							/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/etcdserver/etcdserverpb/rpc.pb.go:2203
#	0x2233291	k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3.(*retryWriteKVClient).Range+0x91							<autogenerated>:174
#	0x222c55e	k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3.(*retryKVClient).Range.func1+0x8e							/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3/retry.go:92
#	0x222c3b6	k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3.(*Client).newAuthRetryWrapper.func1+0x46						/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3/retry.go:61
#	0x2224387	k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3.(*retryKVClient).Range+0x157								/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3/retry.go:94
#	0x221d67d	k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3.(*kv).do+0x4ed										/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3/kv.go:145
#	0x221cf5c	k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3.(*kv).Do+0x7c										/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3/kv.go:124
#	0x221ca61	k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3.(*kv).Get+0xe1										/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/coreos/etcd/clientv3/kv.go:98
#	0x220bf91	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/etcd3.(*store).Get+0x131									/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/etcd3/store.go:128
#	0xf7d135	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage.(*Cacher).Get+0xc5									/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/storage/cacher.go:360
#	0xf8ac73	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/registry/generic/registry.(*Store).Get+0x183							/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/registry/generic/registry/store.go:517
#	0x225a356	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.GetResource.func1+0x1f6							/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:162
#	0x2259f6f	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.getResourceHandler.func1+0x19f							/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:123
#	0x223abbb	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1+0x1eb							/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:104
#	0xfc4bef	k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch+0xb9f								/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:272
#	0xfd3047	k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).(k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.dispatch)-fm+0x47	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:120
#	0x603bc3	net/http.HandlerFunc.ServeHTTP+0x43														/usr/local/go/src/net/http/server.go:1726
#	0x6050be	net/http.(*ServeMux).ServeHTTP+0x7e														/usr/local/go/src/net/http/server.go:2022
#	0x102a204	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1+0x364							/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:50
#	0x603bc3	net/http.HandlerFunc.ServeHTTP+0x43														/usr/local/go/src/net/http/server.go:1726
#	0x102c10c	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1+0x1e4c							/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:47
#	0x603bc3	net/http.HandlerFunc.ServeHTTP+0x43														/usr/local/go/src/net/http/server.go:1726
#	0x1029791	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAudit.func1+0x911								/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/audit.go:137
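
The dominant stack above shows roughly 20,000 request-handler goroutines parked in grpc's transport.wait while trying to open new streams to etcd, which matches the grpc hang discussed earlier in the thread. For reference, a minimal sketch of how such a dump can be collected from the apiserver's pprof endpoint (assuming profiling is enabled, which is the default, and that kubectl proxy is exposing the apiserver on localhost:8001; both are assumptions, not part of the original report):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// debug=1 yields the aggregated "N @ addr ..." format shown above.
	resp, err := http.Get("http://127.0.0.1:8001/debug/pprof/goroutine?debug=1")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	_, _ = io.Copy(os.Stdout, resp.Body) // dump the profile to stdout
}
```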

zhouhaibing089

Feb 5, 2018, 3:51:15 AM

This looks exactly the same as #57061.

fejta-bot

May 6, 2018, 5:26:51 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

zhouhaibing089

May 6, 2018, 8:26:44 AM

/remove-lifecycle stale

fejta-bot

Aug 4, 2018, 8:48:22 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

Nikhita Raghunath

Aug 10, 2018, 10:32:41 AM

/remove-lifecycle stale

Ijaz ahmad

Aug 14, 2018, 10:49:59 AM

Same issue:

kubectl create -f kubia-liveness.yml 
Error from server (ServerTimeout): error when creating "kubia-liveness.yml": the server cannot complete the requested operation at this time, try again later (post pods)
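
On the client side, a ServerTimeout is a transient error ("try again later"), so callers can detect it and retry with backoff. A minimal hedged sketch (not from the thread; the doCreate callback stands in for whatever client-go call failed, e.g. a Pods(ns).Create call):

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// createWithRetry retries transient ServerTimeout/Timeout errors with a
// simple linear backoff and surfaces everything else immediately.
func createWithRetry(ctx context.Context, attempts int, doCreate func(context.Context) error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = doCreate(ctx); err == nil {
			return nil
		}
		if !apierrors.IsServerTimeout(err) && !apierrors.IsTimeout(err) {
			return err // not transient; don't retry
		}
		time.Sleep(time.Duration(i+1) * time.Second)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	// Stubbed call so the sketch compiles and runs on its own.
	err := createWithRetry(context.Background(), 3, func(context.Context) error { return nil })
	fmt.Println("result:", err)
}
```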

fejta-bot

Nov 12, 2018, 10:42:05 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot

Dec 12, 2018, 11:26:37 AM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot

Jan 11, 2019, 12:11:28 PM

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.


Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Kubernetes Prow Robot

Jan 11, 2019, 12:11:45 PM

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Kubernetes Prow Robot

Jan 11, 2019, 12:11:48 PM

Closed #45811.

Nolan Woods

Mar 26, 2021, 8:06:29 PM

/reopen



Kubernetes Prow Robot

Mar 26, 2021, 8:06:36 PM

@innovate-invent: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
