@kubernetes/sig-api-machinery-bugs
@dims I'm new to the community, so I'm not sure of the process here -- is the gcp SIG label meant to imply this is a GCP-only bug? I've also been able to reproduce this on Docker for Mac.
@TheKevJames You mean you used kubectl on Mac, right? (still talking to the apiservers and kubelet on GKE)
No, to clarify: I have reproduced this with Docker for Mac --> enable Kubernetes locally, point kubectl at localhost, and apply the reproduction case there.
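For reference, the local repro is roughly the following (assuming the reproduction manifest from the issue description is saved as repro.yaml; the kubectl context name may differ across Docker for Mac versions):
$ kubectl config use-context docker-for-desktop   # context created by Docker for Mac
$ kubectl apply -f repro.yaml                     # reproduction case from the issue description
$ docker ps | grep api                            # find the local apiserver container
$ docker logs <container-id>                      # watch for the errors below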
When reproducing locally, I get the following apiserver logs:
$ docker ps | grep api | head -n1
1817b75324a3 docker/kube-compose-api-server "/api-server --kubec…" 3 minutes ago Up 3 minutes k8s_compose_compose-api-7bb7b5968f-86c4g_docker_b6e9de31-4fcf-11e8-9431-025000000001_0
$ docker logs 1817b75324a3
ERROR: logging before flag.Parse: W0508 17:03:54.289979 1 client_config.go:529] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
ERROR: logging before flag.Parse: I0508 17:03:54.314585 1 serve.go:85] Serving securely on 0.0.0.0:9443
ERROR: logging before flag.Parse: E0508 17:05:24.164918 1 webhook.go:187] Failed to make webhook authorizer request: Post https://10.96.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: unexpected EOF
ERROR: logging before flag.Parse: E0508 17:05:24.174055 1 errors.go:90] Post https://10.96.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: unexpected EOF
ERROR: logging before flag.Parse: E0508 17:06:18.359667 1 webhook.go:187] Failed to make webhook authorizer request: Post https://10.96.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: unexpected EOF
ERROR: logging before flag.Parse: E0508 17:06:18.360648 1 errors.go:90] Post https://10.96.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: unexpected EOF
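The repeated "unexpected EOF" on the subjectaccessreviews POST suggests the connection from this container to the apiserver at 10.96.0.1 is being dropped mid-request. You can hit the same endpoint directly to probe it; the user and resource attributes below are just illustrative:
$ cat <<EOF | kubectl create -f - -o yaml
apiVersion: authorization.k8s.io/v1beta1
kind: SubjectAccessReview
spec:
  user: system:serviceaccount:docker:default
  resourceAttributes:
    verb: get
    resource: pods
    namespace: default
EOF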
/remove-sig gcp
This is the root cause, I think: kubernetes/kube-openapi#66
/assign @mbohlool
@mbohlool It looks like the openapi fix has not changed anything on my side. Happy to post more logs if need be, but everything is still returning the same result.
Ah, I spoke too soon; I got slightly different output from the gcr.io/google_containers/kube-apiserver-amd64 container this time around:
$ docker logs b6d6ddc8d075
I0604 19:57:37.825479 1 server.go:121] Version: v1.9.6
I0604 19:57:38.367769 1 feature_gate.go:190] feature gates: map[Initializers:true]
I0604 19:57:38.367865 1 initialization.go:90] enabled Initializers feature as part of admission plugin setup
I0604 19:57:38.371354 1 master.go:225] Using reconciler: master-count
W0604 19:57:38.470959 1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
W0604 19:57:38.480921 1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0604 19:57:38.482015 1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0604 19:57:38.504572 1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/06/04 19:57:38 log.go:33: [restful/swagger] listing is available at https://192.168.65.3:6443/swaggerapi
[restful] 2018/06/04 19:57:38 log.go:33: [restful/swagger] https://192.168.65.3:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/06/04 19:57:39 log.go:33: [restful/swagger] listing is available at https://192.168.65.3:6443/swaggerapi
[restful] 2018/06/04 19:57:39 log.go:33: [restful/swagger] https://192.168.65.3:6443/swaggerui/ is mapped to folder /swagger-ui/
I0604 19:57:41.390127 1 serve.go:89] Serving securely on [::]:6443
I0604 19:57:41.390355 1 apiservice_controller.go:112] Starting APIServiceRegistrationController
I0604 19:57:41.390380 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0604 19:57:41.390703 1 controller.go:84] Starting OpenAPI AggregationController
I0604 19:57:41.391625 1 available_controller.go:262] Starting AvailableConditionController
I0604 19:57:41.391684 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0604 19:57:41.393558 1 crd_finalizer.go:242] Starting CRDFinalizer
I0604 19:57:41.401010 1 customresource_discovery_controller.go:152] Starting DiscoveryController
I0604 19:57:41.401075 1 naming_controller.go:274] Starting NamingConditionController
I0604 19:57:41.409600 1 crdregistration_controller.go:110] Starting crd-autoregister controller
I0604 19:57:41.409677 1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
I0604 19:57:41.490571 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0604 19:57:41.497738 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0604 19:57:41.515861 1 controller_utils.go:1026] Caches are synced for crd-autoregister controller
I0604 19:57:41.515923 1 autoregister_controller.go:136] Starting autoregister controller
I0604 19:57:41.516149 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0604 19:57:41.616825 1 cache.go:39] Caches are synced for autoregister controller
I0604 19:57:42.459589 1 trace.go:76] Trace[1507878958]: "GuaranteedUpdate etcd3: *core.Pod" (started: 2018-06-04 19:57:41.454891905 +0000 UTC m=+3.722894185) (total time: 1.004607621s):
Trace[1507878958]: [1.001219035s] [1.001142295s] Transaction prepared
I0604 19:57:42.459910 1 trace.go:76] Trace[1672314933]: "Update /api/v1/namespaces/kube-system/pods/kube-apiserver-docker-for-desktop/status" (started: 2018-06-04 19:57:41.454748913 +0000 UTC m=+3.722751182) (total time: 1.005137605s):
Trace[1672314933]: [1.004941968s] [1.004843589s] Object stored in database
I0604 19:57:42.481132 1 trace.go:76] Trace[1746848947]: "GuaranteedUpdate etcd3: *core.Event" (started: 2018-06-04 19:57:41.463771958 +0000 UTC m=+3.731774237) (total time: 1.017337507s):
Trace[1746848947]: [1.01634977s] [1.005561411s] Transaction prepared
I0604 19:57:43.337819 1 controller.go:538] quota admission added evaluator for: { endpoints}
I0604 19:57:44.072456 1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.custom.metrics.k8s.io
I0604 19:57:45.546360 1 trace.go:76] Trace[2118055391]: "Create /apis/authorization.k8s.io/v1beta1/subjectaccessreviews" (started: 2018-06-04 19:57:41.540119368 +0000 UTC m=+3.808121636) (total time: 4.006201578s):
Trace[2118055391]: [4.004022562s] [4.003356687s] About to store object in database
I0604 19:57:45.550560 1 trace.go:76] Trace[1189816189]: "Create /apis/authorization.k8s.io/v1beta1/subjectaccessreviews" (started: 2018-06-04 19:57:41.548526068 +0000 UTC m=+3.816528330) (total time: 4.001997625s):
Trace[1189816189]: [4.001747743s] [4.001689269s] About to store object in database
E0604 19:57:45.883713 1 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:509
/usr/local/go/src/runtime/panic.go:491
/usr/local/go/src/runtime/panic.go:63
/usr/local/go/src/runtime/signal_unix.go:367
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/aggregator/aggregator.go:145
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/aggregator/aggregator.go:50
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/aggregator/aggregator.go:160
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/aggregator/aggregator.go:177
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/aggregator.go:264
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/controller.go:141
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/controller.go:74
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/controller.go:107
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/controller.go:93
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/controller.go:87
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/go/src/runtime/asm_amd64.s:2337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1506855]
goroutine 1581 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x111
panic(0x3387ec0, 0x8fe35e0)
/usr/local/go/src/runtime/panic.go:491 +0x283
k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/aggregator.(*referenceWalker).Start(0xc42a9f2540)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/aggregator/aggregator.go:145 +0x45
k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/aggregator.walkOnAllReferences(0xc4279a8000, 0xc42d4ca300)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/aggregator/aggregator.go:50 +0x7d
k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/aggregator.usedDefinitionForSpec(0xc42d4ca300, 0xc42a9f2a28)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/aggregator/aggregator.go:160 +0xa1
k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/aggregator.FilterSpecByPaths(0xc42d4ca300, 0xc4226e1cb0, 0x1, 0x1)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-openapi/pkg/aggregator/aggregator.go:177 +0x43
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi.(*specAggregator).UpdateAPIServiceSpec(0xc424afd260, 0xc42a454c40, 0x1d, 0xc42d4ca300, 0xc42c23a160, 0xa3, 0x0, 0x0)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/aggregator.go:264 +0x2b2
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi.(*AggregationController).sync(0xc425755770, 0xc42a454c40, 0x1d, 0xc42642fde0, 0x1, 0x1)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/controller.go:141 +0x1d7
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi.(*AggregationController).(k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi.sync)-fm(0xc42a454c40, 0x1d, 0xc42642fde0, 0x1, 0x1)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/controller.go:74 +0x3e
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi.(*AggregationController).processNextWorkItem(0xc425755770, 0x956a00)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/controller.go:107 +0x14f
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi.(*AggregationController).runWorker(0xc425755770)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/controller.go:93 +0x2b
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi.(*AggregationController).(k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi.runWorker)-fm()
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/controller.go:87 +0x2a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc427b16a00)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x5e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc427b16a00, 0x3b9aca00, 0x0, 0x45d301, 0xc42006a180)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc427b16a00, 0x3b9aca00, 0xc42006a180)
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi.(*AggregationController).Run
/workspace/anago-v1.9.6-beta.0.17+9f8ebd171479be/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/controllers/openapi/controller.go:87 +0x179
@TheKevJames The fix went into the 1.9.8 patch release: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md/#other-notable-changes-2
Look for item #63626.
I see the log you posted is from a 1.9.6 server; can you try this with a 1.9.8+ server? It should be fixed there. If it is not, let me know.
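A quick way to confirm which server version you are actually running before retesting (output shown is illustrative):
$ kubectl version --short
Client Version: v1.11.2
Server Version: v1.9.6
$ docker ps --format '{{.Image}}' | grep apiserver
gcr.io/google_containers/kube-apiserver-amd64:v1.9.6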
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
@mbohlool Sorry for the late response on this one. I can no longer reproduce this bug on version 1.11.2-gke.18. Thanks!
Closed #63494.