
kubeadm join doesn't update NO_PROXY and no_proxy env variables #3099

Closed
JKBGIT1 opened this issue Aug 15, 2024 · 3 comments
Labels
kind/support Categorizes issue or PR as a support question. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done.


JKBGIT1 commented Aug 15, 2024

Is this a BUG REPORT or FEATURE REQUEST?

FEATURE REQUEST

Versions

kubeadm version (use kubeadm version):

kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.4", GitCommit:"a51b3b711150f57ffc1f526a640ec058514ed596", GitTreeState:"clean", BuildDate:"2024-08-14T19:02:46Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):
Client Version: v1.30.4
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.4
  • Cloud provider or hardware configuration: Azure
  • OS (e.g. from /etc/os-release):
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
  • Kernel (e.g. uname -a):
Linux proxy-test 6.5.0-1025-azure #26~22.04.1-Ubuntu SMP Thu Jul 11 22:33:04 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
  • Container runtime (CRI) (e.g. containerd, cri-o): containerd v1.7.20
  • Container networking plugin (CNI) (e.g. Calico, Cilium): Cilium v1.16.0

What happened?

kubeadm join doesn't update NO_PROXY and no_proxy environment variables in kube-proxy DaemonSet and static pods when joining a new node.

What you expected to happen?

Running kubeadm join should update the NO_PROXY and no_proxy environment variables in the kube-proxy DaemonSet and static pods according to the values in systemctl show-environment or the session env variables.

How to reproduce it (as minimally and precisely as possible)?

  1. Create 2 VMs in a cloud provider of your choice (I used Azure).

  2. Set up those VMs according to the documentation. That includes:

  • Turn off the swap (docs).
  • Allow IPv4 packet forwarding (docs).
  • Install and configure containerd according to the docs. I installed containerd.io 1.7.20 using the .deb package, generated a new /etc/containerd/config.toml by running containerd config default > /etc/containerd/config.toml, and finally enabled SystemdCgroup in that configuration.
  • Install kubeadm, kubelet, and kubectl (docs).
  3. SSH into the master node and set up the proxy env variables by running the following commands (replace <> with valid values).
export HTTP_PROXY=http://<proxy-URL>:<proxy-port>
export HTTPS_PROXY=http://<proxy-URL>:<proxy-port>
export NO_PROXY=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>
export http_proxy=http://<proxy-URL>:<proxy-port>
export https_proxy=http://<proxy-URL>:<proxy-port>
export no_proxy=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>
systemctl set-environment HTTP_PROXY=http://<proxy-URL>:<proxy-port> \
  HTTPS_PROXY=http://<proxy-URL>:<proxy-port> \
  NO_PROXY=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP> \
  http_proxy=http://<proxy-URL>:<proxy-port> \
  https_proxy=http://<proxy-URL>:<proxy-port> \
  no_proxy=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>
  4. Run systemctl restart containerd

  5. Run kubeadm init

  6. Run the following commands to set up the default kubeconfig:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  7. Install Helm v3.15.3 following the docs.

  8. Add the Cilium repo by running helm repo add cilium https://helm.cilium.io/

  9. Install Cilium v1.16.0 by running helm install cilium cilium/cilium --version 1.16.0 --namespace kube-system

  10. Make sure all pods (besides one cilium-operator replica that will be Pending) are up and healthy.

  11. Check the NO_PROXY and no_proxy environment variables in the static pods and the kube-proxy DaemonSet. They should correspond to the values set in step 3.

  12. SSH into the worker node and set the proxy env variables by running the commands below. The worker node's public and private IPs are added to the NO_PROXY and no_proxy variables (replace <> with valid values).

export HTTP_PROXY=http://<proxy-URL>:<proxy-port>
export HTTPS_PROXY=http://<proxy-URL>:<proxy-port>
export NO_PROXY=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>,<worker-node-private-IP>,<worker-node-public-IP>
export http_proxy=http://<proxy-URL>:<proxy-port>
export https_proxy=http://<proxy-URL>:<proxy-port>
export no_proxy=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>,<worker-node-private-IP>,<worker-node-public-IP>
systemctl set-environment HTTP_PROXY=http://<proxy-URL>:<proxy-port> \
  HTTPS_PROXY=http://<proxy-URL>:<proxy-port> \
  NO_PROXY=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>,<worker-node-private-IP>,<worker-node-public-IP> \
  http_proxy=http://<proxy-URL>:<proxy-port> \
  https_proxy=http://<proxy-URL>:<proxy-port> \
  no_proxy=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>,<worker-node-private-IP>,<worker-node-public-IP>
  13. Run systemctl restart containerd on the worker node.
  14. Use kubeadm join to connect the worker node to the cluster.
  15. Check the NO_PROXY and no_proxy environment variables in the static pods and the kube-proxy DaemonSet. You won't see the worker's public and private IPs there, because these env variables weren't updated.
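As a side note, export only affects the current shell session, and systemctl set-environment does not survive a reboot. A more persistent variant (a sketch, assuming a systemd-managed containerd; the placeholder values and the http-proxy.conf file name are illustrative, not from this issue) stages a systemd drop-in for containerd:

```shell
# Stage a proxy drop-in for containerd locally, then install it with sudo.
# All <...> placeholders and the drop-in file name are illustrative.
DROPIN="$(mktemp)"
cat > "$DROPIN" <<'EOF'
[Service]
Environment="HTTP_PROXY=http://<proxy-URL>:<proxy-port>"
Environment="HTTPS_PROXY=http://<proxy-URL>:<proxy-port>"
Environment="NO_PROXY=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>"
EOF
echo "Staged drop-in at $DROPIN"
# Then, on the node:
#   sudo mkdir -p /etc/systemd/system/containerd.service.d
#   sudo cp "$DROPIN" /etc/systemd/system/containerd.service.d/http-proxy.conf
#   sudo systemctl daemon-reload && sudo systemctl restart containerd
```

This keeps the proxy settings scoped to the containerd service rather than the whole machine, and they survive reboots.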
@neolit123 neolit123 added this to the v1.30 milestone Aug 15, 2024
@neolit123 neolit123 self-assigned this Aug 15, 2024
@neolit123 neolit123 added kind/bug Categorizes issue or PR as related to a bug. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Aug 15, 2024
Member

neolit123 commented Aug 15, 2024

I called sudo NO_PROXY=foo.test kubeadm init ...

and that resulted in:

$ sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep NO_PROXY -C 1
    env:
    - name: NO_PROXY
      value: foo.test

$ k get ds kube-proxy -n kube-system -o yaml | grep NO_PROXY -C 1
              fieldPath: spec.nodeName
        - name: NO_PROXY
          value: foo.test
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"30+", GitVersion:"v1.30.4-dirty", GitCommit:"a51b3b711150f57ffc1f526a640ec058514ed596", GitTreeState:"dirty", BuildDate:"2024-08-15T13:43:23Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}

This works as expected: the kube-proxy DS is created only on init, and it has the proxy env as expected.

I also tried join:

$ sudo NO_PROXY=foo.test.join kubeadm join ...
...
$ sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep NO_PROXY -C 1
    env:
    - name: NO_PROXY
      value: foo.test.join

kubeadm join doesn't update NO_PROXY and no_proxy environment variables in kube-proxy DaemonSet and static pods when joining a new node.

Why do you expect kubeadm join to modify the NO_PROXY env vars on the kube-proxy DS?
That should not happen, as the kube-proxy DS is only created on init and updated on upgrade. join is not supposed to mutate it.
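For completeness: if extra NO_PROXY entries really are needed on an already-deployed kube-proxy DaemonSet, the DS can be patched manually after the fact. This is a hedged sketch of a manual step, not something kubeadm join performs; the value below is the placeholder list from the report, and the command is printed rather than executed so it can be reviewed first:

```shell
# Compose the desired NO_PROXY value (placeholders as in the report above).
NEW_NO_PROXY="127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>,<worker-node-private-IP>,<worker-node-public-IP>"
# Print the kubectl command instead of running it (this sketch has no cluster);
# running the printed command rolls the kube-proxy pods with the new env.
echo "kubectl -n kube-system set env daemonset/kube-proxy NO_PROXY=${NEW_NO_PROXY} no_proxy=${NEW_NO_PROXY}"
```

Note that such a manual patch would be overwritten the next time kubeadm upgrade regenerates the addon.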

@neolit123 neolit123 added kind/support Categorizes issue or PR as a support question. and removed kind/bug Categorizes issue or PR as related to a bug. labels Aug 15, 2024
Author

JKBGIT1 commented Aug 16, 2024

Hello @neolit123 , thanks for the answer.

why do you expect kubeadm join to modify the NO_PROXY env vars on the kube-proxy DS?
that should not happen, as the kube-proxy DS is only created on init and upgrade on upgrade. join is not supposed to mutate it.

Based on your question I would assume nodes' private and public IPs are useless in the NO_PROXY and no_proxy env variables in the kube-proxy DaemonSet, and the only important values for this DS are 127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc. If that's the case, I don't think I'll need kubeadm join to update the no-proxy env variables in the kube-proxy DS.

$ sudo NO_PROXY=foo.test.join kubeadm join ...

I must have done something wrong because I tried this cmd before opening the issue, but it didn't work. Anyway, I plan to try it once again. Thanks!

Member

neolit123 commented Aug 16, 2024

kubeadm accepts a KubeProxyConfiguration on kubeadm init, but other than that and the *_proxy env vars from the kubeadm init host, there is no way to configure the component. If you need more customization for kube-proxy, you can call kubeadm init --skip-phases=addon/kube-proxy and deploy your own kube-proxy DaemonSet, or manage it with e.g. systemd on each host.
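A sketch of that approach (the --skip-phases value comes from the comment above; the manifest file name is hypothetical, and the command is printed rather than executed here):

```shell
# Initialize the control plane without the kube-proxy addon
# (phase name as given in the comment above; run with sudo on the real host).
INIT_CMD="kubeadm init --skip-phases=addon/kube-proxy"
echo "$INIT_CMD"
# Then deploy and manage kube-proxy yourself, e.g. with a custom DaemonSet
# (my-kube-proxy-daemonset.yaml is a hypothetical file name):
#   kubectl apply -f my-kube-proxy-daemonset.yaml
```

With the addon phase skipped, kubeadm never creates the kube-proxy DaemonSet, so its proxy env vars are entirely under your control.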

closing: working as intended.
