
RFE: Make kubeadm Config easy to consume #1881

Closed
fabriziopandini opened this issue Nov 4, 2019 · 12 comments
Labels: kind/api-change, kind/feature, priority/important-longterm, wg/component-standard


@fabriziopandini (Member)

The kubeadm config should be easier to consume for higher-level tools like CAPI.

v1beta3 is going to fix the inconsistencies in omitempty/+optional markers and the lack of ObjectMeta, but the fact that there are multiple versions of the kubeadm config remains a problem for higher-level tools like CAPI.

How can we make the kubeadm config easier to consume?

@fabriziopandini fabriziopandini added priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. kind/feature Categorizes issue or PR as related to a new feature. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API labels Nov 4, 2019
@fabriziopandini fabriziopandini added this to the v1.17 milestone Nov 4, 2019
neolit123 (Member) commented Nov 4, 2019

> How can we make the kubeadm config easier to consume?

expose converters alongside the public types and don't use the internal types as a stepping stone for conversion.

which roughly translates to A) don't use api-machinery or B) redesign api-machinery.
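
a minimal sketch of what "converters alongside the public types" could look like; the type and field names below are hypothetical stand-ins, not the real kubeadm API types:

```go
// hypothetical public (external) types for two API versions; the real kubeadm
// types live under cmd/kubeadm/app/apis/kubeadm in k/k.
package convert

type ClusterConfigurationV1Beta1 struct {
	KubernetesVersion string
	DNSDomain         string
}

type ClusterConfigurationV1Beta2 struct {
	KubernetesVersion string
	Networking        Networking // assumption: v1beta2 nests the DNS settings
}

type Networking struct {
	DNSDomain string
}

// ConvertV1Beta1ToV1Beta2 converts directly between two public versions,
// without round-tripping through an internal hub type the way api-machinery
// conversion does today.
func ConvertV1Beta1ToV1Beta2(in *ClusterConfigurationV1Beta1) *ClusterConfigurationV1Beta2 {
	return &ClusterConfigurationV1Beta2{
		KubernetesVersion: in.KubernetesVersion,
		Networking:        Networking{DNSDomain: in.DNSDomain},
	}
}
```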

neolit123 (Member) commented Nov 4, 2019

@fabriziopandini @timothysc @ncdc
as @rosti attempted to explain, this is not a kubeadm <-> CAPI-only problem.

this is a problem for:
/wg component-standard

kubeadm has the same problem when consuming the kubelet config: if v1beta1 is gone but users still feed it, loading should just fail. then it's the user's responsibility to convert to the new preferred type.

discussions about making all components (e.g. kubelet, apiserver, etc.) expose conversion on their CLI or via a serving endpoint have seen objections thus far. but frankly we haven't had enough discussions.

as a first step i think CAPI should stop including the kubeadm type as a field and use the following pattern instead:

```yaml
# capi-config bits
kubeadmConfigRefs: v1beta2-ref,v1beta1-ref
---
# v1beta1 kubeadm config bits
name: v1beta1-ref
---
# v1beta2 kubeadm config bits
name: v1beta2-ref
```

then the kubeadm binary can be used to validate all referenced configs and use the first one that passes (we might have to add a command / flag for that); a rough sketch of that selection logic follows.
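
a sketch of the "first config that passes" selection, assuming the referenced documents arrive as one multi-document YAML stream; the supported-version set and the naive `---` split are assumptions for illustration:

```go
package pick

import (
	"fmt"
	"strings"

	"sigs.k8s.io/yaml"
)

type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

// the set of apiVersions this kubeadm build still understands (assumption).
var supported = map[string]bool{
	"kubeadm.k8s.io/v1beta2": true, // preferred
	"kubeadm.k8s.io/v1beta1": true, // still accepted
}

// PickFirstSupported splits a multi-document YAML stream on "---" (naive, but
// enough for a sketch) and returns the first document whose apiVersion is
// still supported.
func PickFirstSupported(multiDoc string) (string, error) {
	for _, doc := range strings.Split(multiDoc, "\n---\n") {
		var tm typeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			continue // not parseable; try the next referenced config
		}
		if supported[tm.APIVersion] {
			return doc, nil
		}
	}
	return "", fmt.Errorf("no referenced kubeadm config uses a supported apiVersion")
}
```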

if the user is feeding a config that is out of support, there is no way for us to support it on the fly, unless CAPI bundles older kubeadm binaries for the sake of conversion.

cc @detiber @vincepri

@k8s-ci-robot k8s-ci-robot added the wg/component-standard Categorizes an issue or PR as relevant to WG Component Standard. label Nov 4, 2019
detiber (Member) commented Nov 4, 2019

So controller-runtime has built some tooling around conversions using only external types. I don't know how reusable it might be for the purposes of kubeadm, and the dependency tree could present quite a few issues, at least until kubeadm moves out of tree.
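
For reference, the controller-runtime pattern marks one external version as the "hub" and has every other version convert to and from it, so no internal type is involved. A minimal compiling sketch with placeholder types (not the real kubeadm ones):

```go
package conversionsketch

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/conversion"
)

// HubConfig stands in for the newest external version (the "hub").
type HubConfig struct {
	KubernetesVersion string
}

// Hub marks this version as the conversion hub.
func (c *HubConfig) Hub() {}

// minimal runtime.Object plumbing so the sketch compiles; real API types get
// these from zz_generated.deepcopy.go and an embedded TypeMeta.
func (c *HubConfig) DeepCopyObject() runtime.Object   { out := *c; return &out }
func (c *HubConfig) GetObjectKind() schema.ObjectKind { return schema.EmptyObjectKind }

// OldConfig stands in for an older external version (a "spoke").
type OldConfig struct {
	KubernetesVersion string
}

func (c *OldConfig) DeepCopyObject() runtime.Object   { out := *c; return &out }
func (c *OldConfig) GetObjectKind() schema.ObjectKind { return schema.EmptyObjectKind }

// ConvertTo and ConvertFrom satisfy conversion.Convertible: spokes convert
// directly to and from the hub, both of which are external types.
func (c *OldConfig) ConvertTo(dst conversion.Hub) error {
	dst.(*HubConfig).KubernetesVersion = c.KubernetesVersion
	return nil
}

func (c *OldConfig) ConvertFrom(src conversion.Hub) error {
	c.KubernetesVersion = src.(*HubConfig).KubernetesVersion
	return nil
}
```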

@neolit123 neolit123 added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. labels Nov 4, 2019
neolit123 (Member):

reducing the priority as this is currently non-blocking, given v1beta1 is going to be available until 1.20.

neolit123 (Member) commented Nov 13, 2019

some of the complexities were discussed in today's cluster-api meeting.
https://youtu.be/NFyYd-vuB30

mitigating the lack of feedback from cloud-init on verification:

the CABPK configuration object is user-created, or at least the user maintains the part about the kubeadm config. this also means the kubeadm config can be verified on the user side.

i do not know if CABPK dynamically modifies the kubeadm config the user provides.

prior to a CAPI deployment that uses CABPK, can the kubeadm config be passed to a kubeadm binary, say in a Makefile rule that downloads the binary from the internet? this would ensure that the kubeadm config is validated before it reaches cloud-init (see the sketch below).

the same can be done for the component configs with their respective binaries, once they support validation on the CLI side.
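
a sketch of that pre-flight check, shelling out to a pinned kubeadm binary. note that at the time of this thread kubeadm had no dedicated validate subcommand (newer releases grew `kubeadm config validate`), so treat the exact invocation as an assumption:

```go
// validate the user-maintained kubeadm config before it reaches cloud-init by
// shelling out to a locally downloaded, version-pinned kubeadm binary.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// assumption: a validation subcommand exists; newer kubeadm releases
	// ship `kubeadm config validate --config <file>`.
	cmd := exec.Command("./bin/kubeadm", "config", "validate", "--config", "kubeadm.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubeadm config failed validation:", err)
		os.Exit(1)
	}
	fmt.Println("kubeadm config is valid for this kubeadm version")
}
```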

propagating kubeadm config fields

the other reason the kubeadm config is embedded is that its values are used by cluster-api in other locations.

this problem can be solved with templates, which is not pretty but it can work:

```yaml
....
kubeadmVariablesThatCAPICaresAbout:
- "clusterDNS": "someValue"
- "foo": "bar"
kubeadmConfig: |
  ...
  {{ .clusterDNS }} # it's the user's responsibility to add these
```

of course, these proposals are attempting to avoid version pinning.
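
rendering such a template is trivial on the CAPI side; a minimal sketch with Go's text/template, reusing the hypothetical variable names from the example above:

```go
// render the user-provided kubeadm config with the values CAPI cares about;
// the variable names and the config snippet are hypothetical.
package main

import (
	"os"
	"text/template"
)

func main() {
	vars := map[string]string{
		"clusterDNS": "someValue",
		"foo":        "bar",
	}
	// it's the user's responsibility to place {{ .clusterDNS }} etc. in the config.
	kubeadmConfig := `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  dnsDomain: {{ .clusterDNS }}
`
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmConfig))
	if err := tmpl.Execute(os.Stdout, vars); err != nil {
		panic(err)
	}
}
```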

@neolit123 neolit123 modified the milestones: v1.17, v1.18 Nov 19, 2019
@neolit123 neolit123 added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Jan 21, 2020
@neolit123 neolit123 modified the milestones: v1.18, v1.19 Mar 8, 2020
@neolit123 neolit123 modified the milestones: v1.19, v1.20 Jun 2, 2020
fejta-bot:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 31, 2020
neolit123 (Member):

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 31, 2020
@neolit123 neolit123 assigned neolit123 and unassigned timothysc Oct 6, 2020
@neolit123 neolit123 modified the milestones: v1.20, v1.21 Dec 2, 2020
@neolit123 neolit123 removed this from the v1.21 milestone Feb 8, 2021
@neolit123 neolit123 modified the milestones: v1.22, v1.21 Feb 8, 2021
fejta-bot:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 9, 2021
neolit123 (Member):

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 10, 2021
neolit123 (Member):

cluster api now implements a new method for consuming the kubeadm types, by forking them; eventually it could consume a vendored version instead.

vincepri (Member) commented Jun 9, 2021

@neolit123 Is there a reason not to consider a staging repository and publish kubeadm that way as a Go module?

neolit123 (Member) commented Jun 9, 2021

it's complicated:

  1. staging is a mechanism that k8s has problems decoupling from. once something is in staging the plan is unclear. the kubectl maintainers are trying to trailblaze by completely decoupling, but i don't think they have a complete plan for CI signal and issue triage.
  2. the k8s release process cannot release components from external repositories, e.g. kubectl, kubeadm.
  3. if we move parts of kubeadm to a separate repository as a library, we cannot vendor that part back into k/k, since e.g. kubeadm imports core types and k/k also imports core types (and such a circular import is disallowed).

we already have plans to make the API vendorable via #2316 or #1949, but this cannot happen soon.

public converters are a whole different topic, but i will not comment on that in this post.
