New Contributors,
Thanks again for signing up for mentorship in our working group! Below you can find more specific instructions and next steps for each subproject. Please read the instructions carefully, and remember that you can ask any questions you have in our mentorship-specific Slack channel, #wg-component-standard-mentorship (it's best to ask questions there so they can be answered by any available mentor, and so that other contributors can see the answers).
We recognize that all of this is likely very new to all of you, and can be confusing, so please don't hesitate to ask questions! We are here to help you understand how things work. Please also don't feel rushed to figure everything out right away. Kubernetes can have a steep learning curve, so it's ok to take a week or two to work through the "homework" below.
Please also keep in mind that some subprojects may require less work, or finish sooner, than others. Just because your subproject wraps up early doesn't mean the mentorship program is over for you. We will work to find you more things to work on, so you can continue to grow on the community ladder.
A note on process and check-ins.
We want to make sure you get the support you need to move your work forward. Due to the number of new contributors in our working group, we likely won't be able to do regular 1:1 meetings, but we still want to make sure we are regularly communicating and helping you to advance in the community.
Is the weekly working group meeting enough for this? Should we set up an additional weekly office hours meeting specific to the mentorship program? Please let us know what works best for you, and we'll try to accommodate. We will also discuss this in our next weekly meeting (Tuesdays, 8:30am PT).
Finally, if you've assigned something on GitHub to a mentor, but aren't sure if they've seen it, please ping us on Slack. Email filters aren't perfect, and GitHub is noisy, so even though we're doing our best we sometimes miss the notifications.
First, make sure you're set up for Kubernetes development.
If you signed up to work on Flag to Config Migrations:
(@savitharaghunathan, @McCoyAle, @mayankshah1607, @palnabarun, @bharaththiruveedula, @LalatenduMohanty)
These instructions will take you through familiarizing yourself with your component's code, producing a list of flags to migrate, and sending your first migration PR.
The first step, if you are working on a component with another contributor (kubelet and the controller managers), is to meet each other over Slack and decide how you would like to collaborate.
Once you've met each other, take a little time to familiarize yourself with the following:
- The code for your component. Take a little time to find where flags are registered and where the current ComponentConfig API lives (a small, hypothetical sketch of the difference follows this list). You can find some general tips in the Flag Migration Guide, as well as specific pointers for the kubelet and kube-proxy. Not all of the components have specific code pointers in that doc yet, so if you feel up to the challenge, feel free to try to fill out the missing areas (but don't feel obligated to do so at this stage).
- You may want to read through the first few sections of the Versioned Component Configuration Files document to understand some of the history and motivation for ComponentConfig.
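To make that first exploration a bit more concrete, here is a minimal, hypothetical sketch of the difference between a flag registered directly on a component's flag set and a setting that lives in a versioned ComponentConfig type. None of the names below come from a real component; they are placeholders for what you should look for in your own component's code.

```go
// Illustrative only: all names here are made up and do not match any real component.
package example

import "github.com/spf13/pflag"

// A setting that has NOT yet been migrated is typically registered directly
// as a command-line flag on the component's flag set:
type exampleFlags struct {
	syncFrequencySeconds int32
}

func (f *exampleFlags) addFlags(fs *pflag.FlagSet) {
	fs.Int32Var(&f.syncFrequencySeconds, "sync-frequency-seconds", 60,
		"How often the component syncs state (hypothetical flag).")
}

// A setting that HAS been migrated lives in the versioned ComponentConfig
// API type instead, and is set through the file passed via --config:
type ExampleComponentConfiguration struct {
	// SyncFrequencySeconds controls how often the component syncs state
	// (hypothetical field).
	SyncFrequencySeconds int32 `json:"syncFrequencySeconds,omitempty"`
}
```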
Once you've met each other and feel familiar with the code, the next step is to audit your component and produce a list of flags that have not yet been migrated to config. You can share the list by creating a single* GitHub issue for your component. Please assign the list to @mtaufen for review before you start migrating flags. If you aren't sure whether a flag should be migrated, add it to your list anyway and note next to it why you aren't sure, so @mtaufen can double-check.
Guidelines for which flags should be migrated can be found in the Flag Migration Guide. These are reprinted here for convenience:
- The --config flag itself should not be moved to config; leave it alone.
- If it is instance-specific, meaning that its value must be unique for every instance of the component that uses the config (say, a Pod's or Node's IP address), leave it alone for now. We haven't decided how to handle these yet.
- If it applies to a specific platform, such as Windows-only, leave it alone for now.
- If it's a flag that came from a third-party library, leave it alone for now. We haven't decided how to handle these yet.
- Double-check that it isn't already deprecated. Deprecated flags will eventually be removed, so there is no need to migrate them to config.
*In the past, I (@mtaufen) tried the approach of creating an issue per Kubelet flag that needed to be migrated. This ultimately flooded contributors' inboxes with email and made it hard to track all of the issues and keep them from going stale. So we believe one issue per component is the best way to go for now.
Once @mtaufen has reviewed and approved your list, migrate a single flag each (you may migrate a few if they are very closely related), and assign the PR to @mtaufen for review. Once it has been reviewed and merged, continue on to the remaining flags. If working with another contributor, you should decide how to split up the remaining work. Please only migrate one or a few related flags in each PR so that PRs are easy to review.
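If it helps to visualize what a single migration PR tends to contain, here is a rough, hypothetical sketch of the usual pattern: add the field to the versioned config type, give it a default, and keep the old flag around (marked deprecated) so existing users aren't broken. The exact field, flag, and function names below are made up and will differ per component; your mentor can point you to the right places.

```go
// Hypothetical sketch of a flag-to-config migration; all names are made up.
package example

import "github.com/spf13/pflag"

// 1. Add the setting to the versioned ComponentConfig type.
type ExampleComponentConfiguration struct {
	// EventBurst is the maximum size of a burst of events (hypothetical field).
	EventBurst int32 `json:"eventBurst,omitempty"`
}

// 2. Give it a default in the API group's defaulting function.
func SetDefaults_ExampleComponentConfiguration(obj *ExampleComponentConfiguration) {
	if obj.EventBurst == 0 {
		obj.EventBurst = 10
	}
}

// 3. Keep the old flag for now, registered against the config field and
//    marked deprecated, so users are nudged toward the config file.
func addFlags(fs *pflag.FlagSet, c *ExampleComponentConfiguration) {
	fs.Int32Var(&c.EventBurst, "event-burst", c.EventBurst,
		"Maximum size of a burst of events (hypothetical flag).")
	fs.MarkDeprecated("event-burst",
		"This flag will be removed in a future release; set eventBurst in the config file instead.")
}
```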
If you signed up to work on a legacyflag Prototype:
(@RainbowMango - Kubelet, @alejandrox1 - kube-proxy)
First, we highly recommend reading the Versioned Component Configuration Files document from last year. This doc describes a lot of the history and motivation for ComponentConfig, and the problems with the implementation in the Kubelet that legacyflag ultimately aims to solve. Please also read the legacyflag KEP for an understanding of what legacyflag is designed to do. Finally, take a look at the (old, but still relevant) kflag example PR, which contains a file (example.go) that demonstrates how to use legacyflag in practice (legacyflag used to be called kflag, but not much has changed between the two versions).
Once you're up to speed, try migrating just the flags that are already represented in ComponentConfig to use legacyflag. Keep in mind, one of the major goals of legacyflag is to eliminate more complicated approaches to merging flags and config. In the Kubelet, this includes eliminating the need for the kubeletConfigFlagPrecedence implementation in favor of legacyflag's approach.
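To illustrate the precedence problem in isolation: values the user explicitly set on the command line should win over values from the --config file, while untouched flags should not. The Kubelet currently gets this behavior by re-parsing flags (kubeletConfigFlagPrecedence). The sketch below shows only that rule with made-up names; it is not legacyflag's actual API, so please treat it as a conceptual aid rather than a template.

```go
// Hypothetical sketch of the flag-over-config precedence rule; this is not
// legacyflag's API, and the names are made up.
package example

import "github.com/spf13/pflag"

type config struct {
	EventBurst int32
}

// applyFlagsOverConfig copies a flag's value into the config only if the
// user explicitly set that flag on the command line; otherwise the value
// from the --config file (or its default) is kept.
func applyFlagsOverConfig(fs *pflag.FlagSet, flagEventBurst int32, cfg *config) {
	if fs.Changed("event-burst") {
		cfg.EventBurst = flagEventBurst
	}
}
```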
Please post a PR with your ongoing work titled "WIP PoC legacyflag in [component]", where "component" is the name of the component you're working on, and assign it to @mtaufen. This will make it easier for mentors to track the progress and answer your questions.
If you signed up to work on ComponentConfig Ergonomics:
- Instance-specific config (@praveensastry)
- This subproject involves standardizing a solution to the "instance-specific" config problem. Please read the below resources and then reach out to @mtaufen to double-check your understanding of the problem. Once you have a good understanding of the problem, you'll progress to writing a KEP to propose your solution to the community, and finally work on refactoring components to implement your proposal.
- The problem comes from the fact that multiple related instances of a component should be able to refer to a single, deduplicated config object, but fields that must be set to a unique value for each instance prevent this. We need to decide how to solve it (most likely by splitting instance-specific configuration into a separate file/struct; see the sketch just after these bullets).
- Please read the Versioned Component Configuration Files document, especially the "Unsolved Problems" section at the end, which refers to the instance-specific config issue.
- Please also read issue #61647, which outlines a potential, simple solution to the problem.
- As part of this project, you'll first familiarize yourself with the overall problem, then write a KEP (Kubernetes Enhancement Proposal) for your solution and send the KEP to the community for review. Please read about the KEP process here. Once you're familiar with that, go ahead and write the KEP, and assign @mtaufen and @stealthybox as initial reviewers.
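As a purely illustrative sketch of the kind of split mentioned above (the actual design is exactly what your KEP should decide), shared configuration could live in one object that every instance references, while values that must be unique per instance live in a second, much smaller object. All names here are made up:

```go
// Purely illustrative; the real shape of the split is up to the KEP.
package example

// SharedConfiguration holds settings that are identical for every instance
// of the component, so it can be deduplicated (e.g. stored once in a ConfigMap).
type SharedConfiguration struct {
	EventBurst    int32  `json:"eventBurst,omitempty"`
	ClusterDomain string `json:"clusterDomain,omitempty"`
}

// InstanceConfiguration holds only the values that must differ per instance,
// such as the address this particular instance binds to.
type InstanceConfiguration struct {
	BindAddress string `json:"bindAddress,omitempty"`
}
```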
- Enabling strict decoders across components (@phenixblue)
- The first step is to reach out to @obitech, who has been driving this line of work and working on a few different patches, and find out where he needs help. It may turn out that there is not much work left, in which case we will find another project for you to help with once strict decoders are enabled. (A tiny illustration of what strict decoding means in practice follows this item.)
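For context on what "strict decoding" buys us, here is a tiny illustration using sigs.k8s.io/yaml's UnmarshalStrict. The actual patches may wire strictness through the apimachinery codecs instead, so treat this only as a sketch of the behavior, with a made-up config type:

```go
// Illustration of strict decoding: unknown fields in a config file become
// errors instead of being silently dropped. Names are hypothetical.
package example

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

type exampleConfig struct {
	EventBurst int32 `json:"eventBurst,omitempty"`
}

func decodeStrict(data []byte) (*exampleConfig, error) {
	cfg := &exampleConfig{}
	// A typo like "eventBrust" in the YAML surfaces as an error here,
	// rather than leaving EventBurst at its zero value.
	if err := yaml.UnmarshalStrict(data, cfg); err != nil {
		return nil, fmt.Errorf("invalid config: %v", err)
	}
	return cfg, nil
}
```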
- Increase test coverage for the ComponentConfig pattern (@tahsinrahman)
- Prior to stepping away from the working group, @luxas was working on increasing test coverage for components that use the ComponentConfig pattern. This work included ensuring all ComponentConfig APIs had round-trip tests and verifications that the API was implemented properly, such as ensuring the ".k8s.io" suffix exists on the registered API group name (a rough sketch of one such check follows this list). @luxas maintained a WIP PR with examples of these ideas.
- First, we recommend reading the first few sections of the Versioned Component Configuration Files document, to familiarize yourself with the basic ideas behind ComponentConfig.
- Then, take a look at @luxas's WIP PR and try to put together a list of the things it was attempting to test. If you think of other things that could or should be tested, add those to your list too!
- To ensure we standardize on a solid approach for testing ComponentConfig APIs, once you have an understanding of the problem space, you should write a KEP with your ideas for improving ComponentConfig testing and send it to the community for review. Please read about the KEP process, and when you're ready, send your proposal for review and assign @mtaufen and @stealthybox as reviewers.
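As one concrete (and deliberately trivial) example of the kind of verification mentioned above, a check on the API group name suffix might look roughly like the following. The group name here is made up; real tests should of course run against each component's actual scheme registration:

```go
// Sketch of a group-name suffix check; the group name here is made up.
package example

import (
	"strings"
	"testing"
)

// exampleGroupName stands in for the component's registered API group name.
const exampleGroupName = "examplecomponent.config.k8s.io"

func TestGroupNameHasKubernetesSuffix(t *testing.T) {
	if !strings.HasSuffix(exampleGroupName, ".k8s.io") {
		t.Errorf("ComponentConfig API group %q should end in .k8s.io", exampleGroupName)
	}
}
```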
If you proposed a new subproject for the working group:
We will reach out to you on Slack to discuss your idea, help you shape it, and ensure it fits the scope of the working group. If it doesn't, we can help you find a project that does.
Thanks, and we look forward to working together with you! Again, if you have questions, please feel free to ask in the mentorship Slack channel.
Best,
Mike
--