[NOTICE] Test jobs that use ./get-kube.sh (e2es, etc.) on release branches are currently failing

Stephen Augustus

Dec 13, 2019, 3:04:26 PM
to Kubernetes developer/contributor discussion, release-...@kubernetes.io, kubernetes-...@googlegroups.com, kubernetes-...@googlegroups.com
Hi kubefolx,

There seems to be a recent change that has modified the way version markers are written to GCS buckets.
What that means is that jobs that rely on ./get-kube.sh to pull the latest CI build from GCP, like the vast majority of our e2e tests, are currently failing.
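
For anyone unfamiliar with the mechanics: a "version marker" is a tiny text file in the CI bucket that names the newest green build, and get-kube.sh resolves it before downloading anything. Here's a minimal sketch of that lookup, assuming a kubernetes-release-dev bucket and a ci/latest-<branch>.txt marker path (both illustrative, not the script's exact layout):

    #!/usr/bin/env bash
    # Sketch only: how a get-kube.sh-style consumer resolves the newest CI
    # build for a release branch from its GCS version marker.
    set -euo pipefail

    branch="1.16"    # release branch of interest (placeholder)

    # The marker is a one-line text file holding the version string of the
    # latest CI build; bucket and path here are illustrative.
    marker_url="https://storage.googleapis.com/kubernetes-release-dev/ci/latest-${branch}.txt"
    version="$(curl -fsSL "${marker_url}")"
    echo "latest CI build on release-${branch}: ${version}"

    # Build artifacts live under a path derived from that version string,
    # so a malformed marker breaks every downstream consumer (e2e jobs, etc.).
    curl -fsSLO "https://storage.googleapis.com/kubernetes-release-dev/ci/${version}/kubernetes.tar.gz"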

As far as I've seen, this issue only seems to be presenting on the release-x.y branches, which suggests the change happened sometime between the previous patch set (11/13) and the recent patch releases we cut on 12/11.

Tim Pepper and I began investigating this last night and will continue today.

Please note that a few issues have already been opened for this:

As soon as I have more information, I'll let you know and update those issues.

-- Stephen

Stephen Augustus

Dec 14, 2019, 12:01:26 AM
to Kubernetes developer/contributor discussion, release-...@kubernetes.io, kubernetes-...@googlegroups.com, kubernetes-...@googlegroups.com

Hey again,

I promised an update...

Normally, during an official patch release, we do the following (sketched in git terms just after the list):

  • Cut a tag for the patch release - v1.y.z
  • Cut a tag for the next beta for that branch - v1.y.(z+1)-beta.0
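
In git terms, the intended order looks roughly like this; the versions are placeholders, and patch_release_commit / beta_start_commit are illustrative variable names, since exactly which commits the real tooling tags is outside this sketch:

    # Intended order only (placeholder versions; the real tags come from the
    # release tooling, not from hand-run commands like these).
    git tag -a v1.16.4 -m "Kubernetes v1.16.4" "${patch_release_commit}"

    # ...and only then the next beta, which later CI builds on the branch
    # describe against.
    git tag -a v1.16.5-beta.0 -m "Kubernetes v1.16.5-beta.0" "${beta_start_commit}"

    git push origin v1.16.4 v1.16.5-beta.0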

It appears that our most recent patch releases tagged the branches out of order: beta tag first, then the patch tag.

This time around, the patch tag and beta tag landed on the same commit.

We use a combination of git describe and regexes (of course) to determine the tag and whether it is a CI tag. Because both tags landed on the same commit, git describe was ambiguous and picked up something of the form v1.y.z-<number-of-commits-past-tag>+<commit-ish>, which makes our CI version regex unhappy.
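
You can see the failure mode in a throwaway repo; the describe invocation and the regex below are approximations in the spirit of the real build scripts, and the versions are placeholders:

    # Reproduce the ambiguity: two tags on one commit, beta tag created first.
    set -euo pipefail
    repo="$(mktemp -d)" && cd "${repo}" && git init -q
    git config user.name "demo" && git config user.email "demo@example.com"

    git commit -q --allow-empty -m "release commit"
    git tag -a v1.16.4-beta.0 -m "beta"      # beta tag first...
    git tag -a v1.16.3        -m "patch"     # ...then the patch tag
    git commit -q --allow-empty -m "a later commit on the branch"

    # With both tags equally near, describe can settle on the patch tag,
    # and the derived version then has no -alpha/-beta component.
    raw="$(git describe --tags --abbrev=14)"                     # e.g. v1.16.3-1-g<sha>
    version="$(sed 's/-g\([0-9a-f]\{14\}\)$/+\1/' <<<"${raw}")"  # e.g. v1.16.3-1+<sha>
    echo "derived version: ${version}"

    # A CI-version check in the spirit of the real one expects a pre-release
    # component (vX.Y.Z-alpha.N... or vX.Y.Z-beta.N...), so this is rejected.
    if [[ ! "${version}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+-(alpha|beta)\. ]]; then
      echo "CI version regex rejects: ${version}" >&2
    fi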

To mitigate this, we manually tagged the tip of the affected release branches:

We validated that this would work against the release-1.14 branch before proceeding with the 1.15 and 1.16 branches.
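
The mitigation is essentially to put an unambiguous tag at the tip of each affected branch so that git describe resolves to a pre-release version again. Roughly this shape, where the branch and tag names below are placeholders for illustration, not the exact tags we pushed:

    # Shape of the mitigation (placeholder branch/tag names).
    branch="release-1.16"
    new_tag="v1.16.5-beta.1"   # hypothetical tag name for illustration

    git fetch origin "${branch}"
    tip="$(git rev-parse "origin/${branch}")"

    git tag -a "${new_tag}" -m "${new_tag}" "${tip}"
    git push origin "${new_tag}"

    # Sanity check: describe at the branch tip now resolves to the new tag,
    # so derived CI versions pick up a -beta component again.
    git describe --tags --abbrev=14 "${tip}"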

Release Engineering is still working to identify why the builds ran out of order, but in the meantime you should see your tests start to go green again.

More details can be found on the tracking issue: https://github.com/kubernetes/kubernetes/issues/86182

Thanks to @tpepper, @ixdy, and @liggitt for their help in debugging this!

Have a great weekend, y'all! 
Stephen 