When: Weekly on Wed, 9:45 – 10:15am
Notes: KubeVirt CI SIG meeting notes
Attendees: dhiller, brianmcarey, fossedihelm, nirdothan
Reminders:
we will create GitHub issues for tracking
GitHub issues and PRs:
should be marked with /sig ci and, if applicable, /kind flake
should be marked with the target SIG (example below)
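For example, a triage comment on a new flake issue could look like the following (the /sig compute line is only illustrative, use the SIG the flaky test belongs to):
    /sig ci
    /kind flake
    /sig compute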
Topics:
[urgent]
previous action items
state of existing issues: https://github.com/kubevirt/kubevirt/issues?q=is%3Aissue+is%3Aopen+label%3Akind%2Fflake+sort%3Aupdated-asc+label%3Asig%2Fci
[non-urgent]
Look at flakes
flake stats - create issues accordingly
count: 1927 failures overall
sig-compute
347: 1.35 periodics issue: https://testgrid.k8s.io/kubevirt-periodics#periodic-kubevirt-e2e-k8s-1.35-sig-compute&width=20
postponed until after GA
(existing) https://github.com/kubevirt/kubevirt/issues/15976
131: 1.33 periodics lane probably failing due to an older clustered failure: https://testgrid.k8s.io/kubevirt-periodics#periodic-kubevirt-e2e-k8s-1.33-sig-compute&width=20
121: 1.34 periodics has a new clustered failure: https://testgrid.k8s.io/kubevirt-periodics#periodic-kubevirt-e2e-k8s-1.34-sig-compute&width=20
(commented on existing issue since similar symptoms) https://github.com/kubevirt/kubevirt/issues/16084#issuecomment-3551842225
96: 1.32 periodics doesn't show anything obvious, needs to be investigated: https://testgrid.k8s.io/kubevirt-periodics#periodic-kubevirt-e2e-k8s-1.32-sig-compute&width=20
main contributors to failures are the quarantined tests
83: 1.34 serial lane https://testgrid.k8s.io/kubevirt-presubmits#pull-kubevirt-e2e-k8s-1.34-sig-compute-serial&width=20
(commented on existing issue since similar symptoms) https://github.com/kubevirt/kubevirt/issues/16084#issuecomment-3551713104
7: AfterSuite failures - likely a symptom of failed cleanup after a clustered failure or cluster breakdown https://storage.googleapis.com/kubevirt-prow/reports/flakefinder/kubevirt/kubevirt/flake-stats-14days-2025-11-19.html#AfterSuite
search.ci query shows all failures are related to the cluster breakdown, i.e. a symptom rather than a separate flake
Q: why is a test marked as sig-compute running on the storage lane? https://storage.googleapis.com/kubevirt-prow/reports/flakefinder/kubevirt/kubevirt/flake-stats-14days-2025-11-19.html#%5brfe_id%3a393%5d%5bcrit%3ahigh%5d%5bvendor%3acnv-qe%40redhat.com%5d%5blevel%3asystem%5d%5bsig-compute%5d%20Live%20Migration%20across%20namespaces%20with%20migration%20policy%20should%20be%20able%20to%20cancel%20a%20migration%20by%20deleting%20the%20migration%20resource%20delete%20source%20migration
dequarantine tests (see the quarantine marker sketch after this list):
look at list of quarantined tests
count: 14 tests in quarantine currently
check status, i.e. who is working on those
look at PRs that want to fix flakes
n/a
see whether we can dequarantine tests
n/a
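For reference, a minimal sketch of how a quarantined test is typically marked in the kubevirt/kubevirt functional test suite, assuming the Quarantine and SigCompute Ginkgo label decorators from tests/decorators (test name and body are illustrative, not from the notes); dequarantining essentially means removing the decorator and the [QUARANTINE] tag once the test is stable again:

    // Hypothetical example; the spec text and body are placeholders.
    package tests_test

    import (
        . "github.com/onsi/ginkgo/v2"

        "kubevirt.io/kubevirt/tests/decorators"
    )

    var _ = Describe("[sig-compute] Example VMI lifecycle", decorators.SigCompute, func() {
        // While quarantined, the test is filtered out of the regular e2e lanes.
        It("[QUARANTINE] should start the VMI", decorators.Quarantine, func() {
            // test body omitted
        })
    })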
misc topics
[bc] there was a GitHub outage last night that impacted a number of Prow jobs
Action items
update/create issues with latest flakes spotted
communication
send meeting notes to kubevirt-dev, bcc sig people for spotted flakes (include meeting changes for upcoming instances)
Kind regards,
Daniel Hiller
He / Him / His
Principal Software Engineer, KubeVirt CI, OpenShift Virtualization
Red Hat GmbH, Registered seat: Werner von Siemens Ring 12, D-85630 Grasbrunn, Germany Commercial register: Amtsgericht Muenchen/Munich, HRB 153243, Managing Directors: Ryan Barnhart, Charles Cachera, Avril Crosse O'Flaherty