When: Weekly on Wed, 9:45 – 10:15am
Notes: KubeVirt CI SIG meeting notes
Attendees: dollierp, dhiller
Reminders:
we will create GitHub issues for tracking
GitHub issues and PRs:
should be marked with /sig ci and /kind flake if applicable
should be marked with the target sig
Topics:
[urgent]
[dollierp] GitHub sent a notice stating that webhook secrets were leaked in HTTP headers between September 2025 and January 2026.
The risk that the secrets leaked externally is very low, since Hook and the external plugins do not appear to log the headers (and the logs have already been rotated anyway).
Regardless, to be 100% safe, all the managed_webhooks should be rotated (doc).
Note that the transition to a GitHub App will simplify webhook management: a single webhook declared at the App level will replace the webhooks currently declared per org/repo.
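The rotation itself is driven by the process in the linked doc; purely as an illustration (repo and hook ID below are placeholders, not real values), the underlying step is generating a fresh HMAC token and updating the webhook config via the GitHub API:

```shell
# Hedged sketch of a manual webhook secret rotation. ORG/REPO/HOOK_ID are
# placeholders; managed_webhooks are normally reconciled by Prow tooling,
# so this only prints the API call rather than executing it.
NEW_SECRET=$(openssl rand -hex 20)   # fresh 40-character hex HMAC token

echo gh api -X PATCH "repos/ORG/REPO/hooks/HOOK_ID/config" -f secret="${NEW_SECRET}"
```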
[dollierp] Nodes kubevirt-worker-bm04 and kubevirt-worker-bm12 appear to have faced transient DNS outages for a couple of hours starting around 2026-04-14 5:00 PM GMT+2.
quarantine PRs will be reviewed and merged within 2 working days, even without SIG lgtm.
sig-scale quarantine PR: https://github.com/kubevirt/kubevirt/pull/17456
there’s an open PR proposing a fix for the issue: https://github.com/kubevirt/kubevirt/pull/17495
not necessary any more, since the values have recovered
unexpected bot activity requires a fix
previous action items
state of existing issues: https://github.com/kubevirt/kubevirt/issues?q=is%3Aissue+is%3Aopen+label%3Akind%2Fflake+sort%3Aupdated-asc+label%3Asig%2Fci
n/a
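The browser query above can also be run from the command line; a hedged equivalent using the GitHub CLI (the search string mirrors the URL's query parameters), printed rather than executed:

```shell
# Equivalent of the issue-tracker URL above as a gh CLI invocation.
# The search terms are copied from the URL's query parameters.
QUERY='is:open label:kind/flake label:sig/ci sort:updated-asc'

echo gh issue list --repo kubevirt/kubevirt --search "${QUERY}"
```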
[non-urgent]
[nirdothan] PR still not approved
still under discussion, needs to get resolved
Look at flakes
flake stats - create issues accordingly
overall ( ∑=724, 100.00% )
periodic-kubevirt-e2e-test-S390X ( ∑=284, 39.23% )
seems to be timing out consistently: example https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/logs/periodic-kubevirt-e2e-test-S390X/2044151353458036736
TODO create or update related issue
periodic-kubevirt-e2e-k8s-1.35-sig-compute ( ∑=77, 10.64% )
no specific abnormalities
periodic-kubevirt-e2e-k8s-1.34-sig-compute ( ∑=66, 9.12% )
no specific abnormalities
periodic-kubevirt-e2e-k8s-1.34-sig-monitoring ( ∑=53, 7.32% )
increased rate of flakes in the lane
periodic-kubevirt-e2e-k8s-1.35-sig-storage ( ∑=53, 7.32% )
no specific abnormalities
periodic-kubevirt-e2e-k8s-1.34-sig-storage ( ∑=48, 6.63% )
no specific abnormalities
pull-kubevirt-e2e-k8s-1.35-sig-compute-serial ( ∑=27, 3.73% )
two clusters of failing tests
pull-kubevirt-e2e-kind-1.35-sig-compute-arm64 ( ∑=23, 3.18% )
seems to be recovering from an overall failing state back to normal
pull-kubevirt-e2e-k8s-1.35-ipv6-sig-network ( ∑=18, 2.49% )
no specific abnormalities
periodic-kubevirt-e2e-k8s-1.35-ipv6-sig-network ( ∑=14, 1.93% )
no specific abnormalities
pull-kubevirt-e2e-k8s-1.34-sig-compute ( ∑=8, 1.10% )
no specific abnormalities
Last updated: 2026-04-15 07:07:40 UTC
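As a sanity check, each lane's percentage above is simply its failure count divided by the overall total of 724; a quick sketch with a few of the counts copied from the stats:

```python
# Recompute per-lane shares of the 724 total failures from the stats above.
counts = {
    "periodic-kubevirt-e2e-test-S390X": 284,
    "periodic-kubevirt-e2e-k8s-1.35-sig-compute": 77,
    "periodic-kubevirt-e2e-k8s-1.34-sig-monitoring": 53,
}
total = 724

for lane, n in counts.items():
    print(f"{lane}: {n / total * 100:.2f}%")
# prints 39.23%, 10.64%, and 7.32%, matching the stats above
```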
Look at held tests:
dequarantine tests:
look at list of quarantined tests
Count: 20 tests in quarantine currently
check status, i.e. who is working on those
look at PRs that want to fix flakes
see whether we can dequarantine tests
misc topics
[dollierp] testgrid
merged new jobs for livez cluster monitoring, but they don’t show up in the grid
dhiller will take a look
Action items
update/create issues with latest flakes spotted
communication
send meeting notes to kubevirt-dev, bcc sig people for spotted flakes (include meeting changes for upcoming instances)
Kind regards,
Daniel Hiller
He / Him / His
Principal Software Engineer, KubeVirt CI, OpenShift Virtualization
Red Hat GmbH, Registered seat: Werner von Siemens Ring 12, D-85630 Grasbrunn, Germany Commercial register: Amtsgericht Muenchen/Munich, HRB 153243, Managing Directors: Ryan Barnhart, Charles Cachera, Avril Crosse O'Flaherty