Actually, looking back through the logs more, it's more broadly kubectl tests that flake:
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
@BenTheElder These flakes are all caused by network connection issues, and we can do nothing but retry when they happen.
Should these tests explicitly handle the network problems caused by the test environment? OTOH, maybe we could try to fix the network.
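For illustration only, here is a minimal sketch of the kind of retry-on-transient-network-error wrapper being discussed, assuming the test shells out to the kubectl binary directly; the helper name, error substrings, attempt count, and backoff values are hypothetical and not the e2e framework's actual API:

```go
// Minimal sketch: retry kubectl calls only when the failure looks like a
// transient network problem. Helper name and retry parameters are hypothetical.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runKubectlWithRetry runs `kubectl <args>` and retries on network-looking
// errors so genuine test failures are still surfaced immediately.
func runKubectlWithRetry(args ...string) (string, error) {
	const attempts = 3
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		lastErr = fmt.Errorf("kubectl %s: %v: %s", strings.Join(args, " "), err, out)
		msg := string(out)
		// Only retry on errors that look network-related (assumed substrings).
		if !strings.Contains(msg, "connection refused") &&
			!strings.Contains(msg, "i/o timeout") &&
			!strings.Contains(msg, "TLS handshake timeout") {
			return "", lastErr
		}
		time.Sleep(time.Duration(i+1) * 2 * time.Second) // simple linear backoff
	}
	return "", lastErr
}

func main() {
	out, err := runKubectlWithRetry("get", "pods", "--namespace", "kube-system")
	if err != nil {
		fmt.Println("kubectl still failing after retries:", err)
		return
	}
	fmt.Print(out)
}
```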
@BenTheElder, @yue9944882: we can see in the test dashboard of "sig-cluster-lifecycle" or the test dashboard of sig-network-gce that this series of "sig-cli" tests is flaking when the deployment uses kubeadm on GCE.
But it looks like it does not depend on the type of network used (default, ipvs, calico, flannel - although obviously calico has more issues).
And when the tests fail, the error is always the same:
could not convert scale update to external Scale: scheme.Scale is not suitable for converting to "v1"
These same tests are not flaking when the deployment is the usual kube-up with GCE. See here: http://k8s-testgrid.appspot.com/sig-network-gce#gci-gce-coredns
Does this ring a bell for either of you?
@fturib I think that warrants its own, higher-priority issue. These tests flake with low regularity in the oldest supported release.
Also as a note, I am setting up GCE conformance tests for the ongoing release today (separate from master) and next week I plan to raise the question of blocking releases on a conformance suite.
I created issue #64450 for these flaky sig-cli tests related to kubeadm/GCE.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #64110.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.