I1123 00:16:53.276839 19717 controller.go:538] quota admission added evaluator for: {mygroup.example.com foocs2sfas}
I1123 00:16:58.402616 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:17:03.428298 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:17:08.451808 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:17:13.474798 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:17:18.498159 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:17:23.525179 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:17:28.548369 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:17:33.571418 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:17:38.593785 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:17:43.616819 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:17:48.641174 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:17:53.293108 19717 garbagecollector.go:154] Shutting down garbage collector controller
I1123 00:17:53.293601 19717 graph_builder.go:348] stopped 42 of 42 monitors
I1123 00:17:53.293636 19717 graph_builder.go:349] GraphBuilder stopping
I1123 00:17:53.305514 19717 controller.go:90] Shutting down OpenAPI AggregationController
2017-11-23 00:17:53.305553 I | integration: terminating 1768666837956983742 (unix://localhost:17686668379569837420)
I1123 00:17:53.305611 19717 autoregister_controller.go:160] Shutting down autoregister controller
I1123 00:17:53.305675 19717 crd_finalizer.go:254] Shutting down CRDFinalizer
I1123 00:17:53.305629 19717 apiservice_controller.go:124] Shutting down APIServiceRegistrationController
I1123 00:17:53.305729 19717 available_controller.go:274] Shutting down AvailableConditionController
I1123 00:17:53.305683 19717 serve.go:129] Stopped listening on [::]:42059
I1123 00:17:53.305662 19717 customresource_discovery_controller.go:163] Shutting down DiscoveryController
I1123 00:17:53.305651 19717 crdregistration_controller.go:139] Shutting down crd-autoregister controller
I1123 00:17:53.305720 19717 naming_controller.go:285] Shutting down NamingConditionController
2017-11-23 00:17:53.351340 I | integration: terminated 1768666837956983742 (unix://localhost:17686668379569837420)
--- FAIL: TestCustomResourceCascadingDeletion (66.08s)
testserver.go:72: Starting etcd...
testserver.go:95: Starting kube-apiserver on port 42059...
testserver.go:106: Waiting for /healthz to be ok...
garbage_collector_test.go:827: created owner resource "ownerh27bs"
garbage_collector_test.go:837: created dependent resource "dependentfhlpm"
garbage_collector_test.go:851: failed waiting for owner resource "ownerh27bs" to be deleted
I1123 00:18:40.707948 19717 garbagecollector.go:253] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"mygroup.example.com/v1beta1", Kind:"foo8xjvqa", Name:"ownerrg8c5", UID:"c32bb543-cfe3-11e7-8085-0242ac110002", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"crd-mixed"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{(*garbagecollector.node)(0xc4286c6340):struct {}{}}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: unable to get REST mapping for mygroup.example.com/v1beta1/foo8xjvqa.
I1123 00:18:45.065730 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:18:50.088628 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:18:55.112194 19717 garbagecollector.go:184] no resource updates from discovery, skipping garbage collector sync
I1123 00:18:59.797854 19717 garbagecollector.go:154] Shutting down garbage collector controller
I1123 00:18:59.797953 19717 graph_builder.go:348] stopped 42 of 42 monitors
I1123 00:18:59.797964 19717 graph_builder.go:349] GraphBuilder stopping
I1123 00:18:59.810016 19717 crdregistration_controller.go:139] Shutting down crd-autoregister controller
I1123 00:18:59.810050 19717 autoregister_controller.go:160] Shutting down autoregister controller
I1123 00:18:59.810077 19717 customresource_discovery_controller.go:163] Shutting down DiscoveryController
I1123 00:18:59.810096 19717 controller.go:90] Shutting down OpenAPI AggregationController
I1123 00:18:59.810115 19717 naming_controller.go:285] Shutting down NamingConditionController
I1123 00:18:59.810116 19717 apiservice_controller.go:124] Shutting down APIServiceRegistrationController
I1123 00:18:59.810146 19717 available_controller.go:274] Shutting down AvailableConditionController
I1123 00:18:59.810100 19717 crd_finalizer.go:254] Shutting down CRDFinalizer
I1123 00:18:59.810293 19717 serve.go:129] Stopped listening on [::]:46103
2017-11-23 00:18:59.810285 I | integration: terminating 1637533451367295481 (unix://localhost:16375334513672954810)
2017-11-23 00:18:59.835727 I | integration: terminated 1637533451367295481 (unix://localhost:16375334513672954810)
--- FAIL: TestMixedRelationships (66.48s)
testserver.go:72: Starting etcd...
testserver.go:95: Starting kube-apiserver on port 46103...
testserver.go:106: Waiting for /healthz to be ok...
garbage_collector_test.go:888: created custom owner "ownerrg8c5"
garbage_collector_test.go:897: created core dependent "dependents5dj4"
garbage_collector_test.go:904: created core owner "owner99lrz": &v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"owner99lrz", GenerateName:"", Namespace:"crd-mixed", SelfLink:"/api/v1/namespaces/crd-mixed/configmaps/owner99lrz", UID:"c337fd82-cfe3-11e7-8085-0242ac110002", ResourceVersion:"49", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63646993079, loc:(*time.Location)(0x9091660)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Data:map[string]string(nil)}
garbage_collector_test.go:915: created custom dependent "dependent8zq9w"
garbage_collector_test.go:929: failed waiting for owner resource "ownerrg8c5" to be deleted
cc @ironcladlou @caesarxuchao
@kubernetes/sig-api-machinery-test-failures
/kind bug
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/55999/pull-kubernetes-unit/67810/
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/55902/pull-kubernetes-unit/67814/
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/56260/pull-kubernetes-unit/67813/
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/47275/pull-kubernetes-unit/67795/
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/56249/pull-kubernetes-unit/67793/
top three flakes for the pull-kubernetes-unit job: http://storage.googleapis.com/k8s-metrics/flakes-latest.json (131 flakes)
[MILESTONENOTIFIER] Milestone Issue Labels Incomplete
Action required: This issue requires label changes. If the required changes are not made within 2 days, the issue will be moved out of the v1.9 milestone.
priority: Must specify exactly one of priority/critical-urgent, priority/important-longterm or priority/important-soon.
[MILESTONENOTIFIER] Milestone Issue Needs Approval
@liggitt @kubernetes/sig-api-machinery-misc
Action required: This issue must have the status/approved-for-milestone label applied by a SIG maintainer.
sig/api-machinery: Issue will be escalated to these SIGs if needed.
priority/critical-urgent: Never automatically move issue out of a release milestone; continually escalate to contributor and SIG through all available channels.
kind/bug: Fixes a bug discovered during the current release.
/status approved-for-milestone
[MILESTONENOTIFIER] Milestone Issue Needs Attention
@liggitt @kubernetes/sig-api-machinery-misc
Action required: During code freeze, issues in the milestone should be in progress.
If this issue is not being actively worked on, please remove it from the milestone.
If it is being worked on, please add the status/in-progress label so it can be tracked with other in-flight issues.
Note: This issue is marked as priority/critical-urgent, and must be updated every 1 day during code freeze.
Example update:
ACK. In progress
ETA: DD/MM/YYYY
Risks: Complicated fix required
sig/api-machinery: Issue will be escalated to these SIGs if needed.
priority/critical-urgent: Never automatically move issue out of a release milestone; continually escalate to contributor and SIG through all available channels.
kind/bug: Fixes a bug discovered during the current release.
[MILESTONENOTIFIER] Milestone Issue Needs Attention
@caesarxuchao @ironcladlou @liggitt @kubernetes/sig-api-machinery-misc
Action required: During code freeze, issues in the milestone should be in progress.
If this issue is not being actively worked on, please remove it from the milestone.
If it is being worked on, please add the status/in-progress label so it can be tracked with other in-flight issues.
Note: This issue is marked as priority/critical-urgent, and must be updated every 1 day during code freeze.
Example update:
ACK. In progress
ETA: DD/MM/YYYY
Risks: Complicated fix required
Issue Labels
sig/api-machinery: Issue will be escalated to these SIGs if needed.
priority/critical-urgent: Never automatically move issue out of a release milestone; continually escalate to contributor and SIG through all available channels.
kind/bug: Fixes a bug discovered during the current release.
[MILESTONENOTIFIER] Milestone Issue Needs Attention
@caesarxuchao @ironcladlou @liggitt @kubernetes/sig-api-machinery-misc
Action required: During code freeze, issues in the milestone should be in progress.
If this issue is not being actively worked on, please remove it from the milestone.
If it is being worked on, please add the status/in-progress label so it can be tracked with other in-flight issues.
Action Required: This issue has not been updated since Nov 23. Please provide an update.
Note: This issue is marked as priority/critical-urgent, and must be updated every 1 day during code freeze.
Example update:
ACK. In progress
ETA: DD/MM/YYYY
Risks: Complicated fix required
Issue Labels
sig/api-machinery: Issue will be escalated to these SIGs if needed.
priority/critical-urgent: Never automatically move issue out of a release milestone; continually escalate to contributor and SIG through all available channels.
kind/bug: Fixes a bug discovered during the current release.
TestMixedRelationships flake looks like a failure to discover custom resource types registered during the test. Investigating.
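Not the fix itself, but for anyone following along, here is a minimal sketch of what "wait for discovery to catch up" could look like with client-go. The helper name, polling intervals, and the group/version are illustrative only (taken from the logs above), not the actual test or fix:

// Sketch only: poll discovery until a newly-registered custom resource is
// advertised, so a REST mapper built from discovery can resolve it.
package gcsketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/discovery"
)

// waitForCustomResource blocks until groupVersion (e.g. "mygroup.example.com/v1beta1")
// advertises the named resource via discovery, or the timeout expires.
func waitForCustomResource(dc discovery.DiscoveryInterface, groupVersion, resource string) error {
	return wait.PollImmediate(500*time.Millisecond, 30*time.Second, func() (bool, error) {
		list, err := dc.ServerResourcesForGroupVersion(groupVersion)
		if err != nil {
			// Group/version not served yet; keep polling rather than failing.
			return false, nil
		}
		for _, r := range list.APIResources {
			if r.Name == resource {
				return true, nil
			}
		}
		return false, nil
	})
}

In the failing runs it is the garbage collector side that trips on the missing REST mapping ("unable to get REST mapping for mygroup.example.com/v1beta1/foo8xjvqa"), so the real fix may belong in the GC's discovery sync rather than in the test; treat the above purely as an illustration of the race.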
Just a quick note for anybody else looking into these: I was able to reproduce the TestMixedRelationships discovery failure with stress:
# prereq: start etcd on http://localhost:2379
go test -i ./test/integration/garbagecollector
go test -c ./test/integration/garbagecollector
stress ./garbagecollector.test -alsologtostderr -vmodule graph_builder*=6,garbagecollector=6 -test.run ^TestMixedRelationships
Hit it once after 30 iterations and again after 129 runs.
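(stress here is the Go stress tool; if you don't have it, I believe go get golang.org/x/tools/cmd/stress will fetch it.)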
I think I've identified the root cause of these flakes. Will open a PR soon.
/status in-progress
[MILESTONENOTIFIER] Milestone Issue Current
@caesarxuchao @ironcladlou @liggitt
Note: This issue is marked as priority/critical-urgent, and must be updated every 1 day during code freeze.
Example update:
ACK. In progress
ETA: DD/MM/YYYY
Risks: Complicated fix required
Issue Labels
sig/api-machinery: Issue will be escalated to these SIGs if needed.
priority/critical-urgent: Never automatically move issue out of a release milestone; continually escalate to contributor and SIG through all available channels.
kind/bug: Fixes a bug discovered during the current release.