interested parties:
@mtaufen (ongoing kubelet work)
@kubernetes/sig-cli-misc (kubectl)
@kubernetes/sig-api-machinery-misc (apiservers)
We should, at the very least, start using our existing API doc generators to produce documentation for componentconfig APIs.
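As a rough illustration of what that would buy us, here is a minimal sketch (the package, type, and field names are hypothetical, not an actual componentconfig API) of the kind of Go doc comments the existing reference-doc generators consume:

```go
// Package v1beta1 is a hypothetical componentconfig API group; the type
// and fields below are illustrative only.
package v1beta1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ExampleComponentConfiguration shows the shape of a componentconfig
// type whose doc comments the existing reference-doc generators could
// render into a reference page, as they do for the core API groups.
type ExampleComponentConfiguration struct {
	metav1.TypeMeta `json:",inline"`

	// SyncFrequency is how often the component re-reads its configuration.
	// +optional
	SyncFrequency metav1.Duration `json:"syncFrequency,omitempty"`

	// FeatureGates maps feature names to booleans that enable or disable
	// experimental behavior.
	// +optional
	FeatureGates map[string]bool `json:"featureGates,omitempty"`
}
```

Pointing the same generators at the real componentconfig packages should turn their existing doc comments into reference pages with no new tooling.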
Not sure whether this is directly related, but we've also got a bit of a mystery with either the generation scripts or the code:
In the 1.10 API reference docs, PodSecurityPolicy v1beta1 policy, which was new (moved) for 1.10, appears in the list of OLD API VERSIONS, while PodSecurityPolicy v1beta1 extensions, which it replaced, appears in the METADATA section, making it look like the current resource. It should be the other way around, shouldn't it?
I can pull this out into a separate issue if y'all deem it unrelated. I haven't dug through the code repo enough, nor do I understand development in k/k well enough, to be sure how the different moving parts of the larger Kubernetes ecosystem, or the different pieces of the doc generators, relate to each other.
API doc generation is a separate issue. There's nothing in the k/k repo that I'm aware of that classifies API groups as "old", so my guess is that the doc-gen scripts categorize unrecognized API groups that way. Separate issue, in any case.
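Purely as an illustration of how that kind of mis-bucketing could arise (this is not the actual reference-docs generator; the allow-list and bucket names are assumptions), a classifier that defaults anything it doesn't recognize to the "old" bucket would reproduce exactly what was observed above if its config was never updated for the 1.10 move:

```go
package main

import "fmt"

// knownCurrent is an assumed allow-list of group/versions the doc-gen
// config treats as current. If it was never updated for the 1.10 move,
// extensions/v1beta1 is still listed and policy/v1beta1 is not.
var knownCurrent = map[string]bool{
	"extensions/v1beta1": true, // stale entry left over from pre-1.10
}

// bucketFor mimics a default-to-"old" classification: any group/version
// not on the allow-list lands in the OLD API VERSIONS section.
func bucketFor(gv string) string {
	if knownCurrent[gv] {
		return "METADATA (current)"
	}
	return "OLD API VERSIONS"
}

func main() {
	for _, gv := range []string{"policy/v1beta1", "extensions/v1beta1"} {
		fmt.Printf("PodSecurityPolicy %s -> %s\n", gv, bucketFor(gv))
	}
}
```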
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #8313.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.