Move Control Planes taint to kubelet config instead of markcontrolplane phase #1621
Comments
i guess this can be a real problem. thanks for the report @yagonobre. does the kubelet allow self-taints with node roles such as
Yes, probably we can keep the phase to add the label.
@neolit123: Can I take a crack at this?
@madhukar32
I was away, but now I'll have enough time to do this.
we might have to keep it in both places with a deprecation notice.
We are seeing this when joining control-plane nodes to existing clusters (1.15). We use nginx-ingress-controller as a DaemonSet, and it's on host port 443, the same as the apiserver. So the apiserver always ends up in a CrashLoop until I manually delete the pod.
this is a known problem. changes in kubeadm phases are tricky - the existing workaround can be seen above, but we might have to have a period of time where we both taint using the kubelet configuration and the kubeadm mark-control-plane phase, potentially deprecating the tainting in the phase in the future.
Not sure I understand how the workaround works. Isn't InitConfiguration used only during init of the first master? Or can it be updated in the ConfigMap in kube-system and used during join --control-plane?
both init and join configurations have the node registration options: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2#NodeRegistrationOptions. tainting using the KubeletConfiguration is not possible.
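As a hedged sketch of the workaround referred to here, using the v1beta2 NodeRegistrationOptions linked above (field names per that API; the taint key matches the one kubeadm applies, but treat this as an illustration rather than the exact config from the thread):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  # These taints are applied via the API by kubeadm during node registration,
  # not by the kubelet itself.
  taints:
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
```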
I've tried to use a JoinConfiguration to set up additional masters, but then I just get this message:
Related issue I found: #1485
Today I add the taint directly to the kubelet config file. I'll try to work on it soon.
Yago, i don't see a field for that in the KubeletConfiguration.

> I've tried to use a JoinConfiguration to set up additional masters, but then I just get this message:

The join configuration can do that. Some flags and config cannot be mixed.

Sorry, it's a flag.
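For reference, a hedged sketch of a JoinConfiguration that sets the taint through nodeRegistration, as the comment above suggests is possible (the discovery endpoint, token, and CA hash are placeholders, not values from this thread):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    # Placeholder values; substitute your cluster's endpoint, token, and hash.
    apiServerEndpoint: "10.0.0.10:6443"
    token: "abcdef.0123456789abcdef"
    caCertHashes:
    - "sha256:<hash>"
controlPlane: {}
nodeRegistration:
  taints:
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
```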
@yagonobre i just tried passing --register-with-taints=node-role.kubernetes.io/master:NoSchedule to the kubelet instead of using the markcontrolplane phase for the taint, and both the CNI and coredns are stuck in Pending saying that there is no Node to schedule on (even if they tolerate the master taint). what k8s version have you tried this with? i'm testing with the latest 1.18.
I'm using `v1.16.2`, but I'll try with the latest version.
On Thu, Dec 5, 2019 at 7:59 PM Lubomir I. Ivanov wrote:

> @yagonobre i just tried passing
> --register-with-taints=node-role.kubernetes.io/master:NoSchedule to the
> kubelet instead of using the markcontrolplane phase for the taint and both
> the CNI and coredns are stuck in Pending saying that there is no Node to
> schedule on (even if they tolerate the master taint).
> what k8s version have you tried this with?
> i'm testing with the latest 1.18.
While we established that the tainting is late and can cause problems, we have not seen reports from more users about it. Paco, are you seeing the problem on your side with customers?
Before we proceed with PRs i think we need to get back to the discussion of whether we want to do it. It also feels a bit odd to separate the taint and labels. Perhaps we want them to continue to be done by the kubeadm client instead of the kubelet.
I have not met this recently.
If we move it to the kubelet, the taint will come back once the kubelet restarts. That would confuse users who remove the taint on purpose. It seems to be a behavior change.
that is indeed a concern
IMO, it is a design problem of the kubelet. One solution that I know of is that the kubelet should add the node labels or taints only on first-time bootstrap, when it registers with the apiserver. After the bootstrap (the node is already added to the cluster), the kubelet should not change its own node labels or taints and should respect the apiserver/etcd storage. But this is another topic. If so, any cluster installer could add the default taints/labels during joining, with no disturbance on later kubelet restarts.
true, so unless the kubelet changes its design we can play around it by adding the taints once and then removing them from the kubelet flags or local config. i must admit i am not a big fan of this solution as it creates more difficulties for us. we can continue doing it in kubeadm with a client - perhaps sooner, but as far as i understand there could still be a short period when the taints are not there yet. IMO this is not a very common issue. ideally users should create the control plane nodes and only then start applying workloads such as daemonsets.
To summarize,

Is this issue resolved now?
some update here. this feature: missed 1.31, but once we have it we will have an instance-config.yaml with KubeletConfiguration on each node, i.e. the same as the existing patches workaround. this would still need a KEP/design doc and a kubeadm feature gate so that users can opt in/out for a few releases.
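A hedged sketch of the "patches workaround" mentioned above, assuming a KubeletConfiguration that supports the registerWithTaints field and kubeadm's patch-target file naming (this illustrates the mechanism; it is not the exact file from the thread, and the taint key shown is the newer control-plane one):

```yaml
# patches/kubeletconfiguration+strategic.yaml
# Applied with something like: kubeadm init --patches ./patches
registerWithTaints:
- key: "node-role.kubernetes.io/control-plane"
  effect: "NoSchedule"
```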
Is this a BUG REPORT or FEATURE REQUEST?
Choose one: BUG REPORT
What happened?
Because we apply the control-plane taint after the control plane comes up, in a multi-control-plane setup we can have pods scheduled to this control plane.
What you expected to happen?
Use the kubelet --register-with-taints config instead of handling it in a separate phase.
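As a hedged sketch of what passing that flag could look like on a kubeadm-provisioned node (the file path is the conventional one on Debian-based installs and is an assumption, not something stated in this issue):

```shell
# /etc/default/kubelet (or /etc/sysconfig/kubelet on RPM-based distros)
# The kubelet would then self-taint at registration time, before any pod
# can be scheduled to the node.
KUBELET_EXTRA_ARGS=--register-with-taints=node-role.kubernetes.io/master:NoSchedule
```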
How to reproduce it (as minimally and precisely as possible)?
Anything else we need to know?
For now, I just use this config, but it would be nice if kubeadm could handle it.