
kubeone doesn't update NO_PROXY and no_proxy in kube-proxy Daemonset and static pods #3310

Open
JKBGIT1 opened this issue Jul 22, 2024 · 6 comments
Labels
kind/discussion sig/cluster-management Denotes a PR or issue as being assigned to SIG Cluster Management.

Comments

JKBGIT1 commented Jul 22, 2024

What happened?

Kubeone didn't update the NO_PROXY and no_proxy env variables in static pods and kube-proxy DaemonSet.

I built a cluster with 1 master and 1 worker node, using an HTTP proxy in the process. The cluster was built as expected. Then I added another worker node to the staticWorkers.hosts array and its public and private IPs to the proxy.noProxy attribute. The new node was added to the k8s cluster as expected; however, its public and private IPs weren't added to the NO_PROXY and no_proxy env variables in the kube-proxy DaemonSet and the static pods in the cluster.

Expected behavior

The NO_PROXY and no_proxy env variables should be updated every time the user changes the proxy.noProxy configuration in the kubeone YAML file and runs kubeone apply.
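
For illustration, here is a minimal sketch (not KubeOne's or kubeadm's exact manifest) of the env entries the kube-proxy DaemonSet container would be expected to carry after such an apply; the addresses are placeholders:

# Sketch only: expected env entries in the kube-proxy DaemonSet container spec
# after `kubeone apply` with an updated proxy.noProxy (placeholder addresses).
env:
- name: NO_PROXY
  value: "svc,<master-private-IP>,<master-public-IP>,<worker1-private-IP>,<worker1-public-IP>,<worker2-private-IP>,<worker2-public-IP>"
- name: no_proxy
  value: "svc,<master-private-IP>,<master-public-IP>,<worker1-private-IP>,<worker1-public-IP>,<worker2-private-IP>,<worker2-public-IP>"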

How to reproduce the issue?

Create 3 VMs with any cloud provider. They have to be connected through a private network and have public IPs.

Replace all the <> placeholders with real values, then run kubeone apply -m <path-to-the-below-config> to build a cluster using your HTTP proxy server.

apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
name: proxy-test
versions:
  kubernetes: 'v1.27.1'
features:
  coreDNS:
    replicas: 2
    deployPodDisruptionBudget: true
  nodeLocalDNS:
    deploy: false
clusterNetwork:
  cni:
    cilium:
      enableHubble: true
cloudProvider:
  none: {}
  external: false
apiEndpoint:
  host: '<master-public-IP>'
  port: 6443
controlPlane:
  hosts:
  - publicAddress: '<master-public-IP>'
    privateAddress: '<master-private-IP>'
    sshUsername: root
    sshPrivateKeyFile: './key.pem'
    hostname: master
    isLeader: true
    taints:
    - key: "node-role.kubernetes.io/control-plane"
      effect: "NoSchedule"
staticWorkers:
  hosts:
  - publicAddress: '<worker1-public-IP>'
    privateAddress: '<worker1-private-IP>'
    sshPort: 22
    sshUsername: root
    sshPrivateKeyFile: './key.pem'
    hostname: worker1
proxy:
  http: "http://<proxy-url>:<proxy-port>"
  https: "http://<proxy-url>:<proxy-port>"
  noProxy: "svc,<master-private-IP>,<worker1-private-IP>,<master-public-IP>,<worker1-public-IP>"
machineController:
  deploy: false

When the previous command finishes, run kubectl describe daemonsets.apps -n kube-system kube-proxy and check the NO_PROXY and no_proxy values in the Environment section. They will have <master-private-IP>,<worker1-private-IP>,<master-public-IP>,<worker1-public-IP> at the end, as expected. The same goes for all the static pods (kube-apiserver, kube-controller-manager, kube-scheduler).
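
If you only want the variable value rather than the full describe output, a jsonpath query along these lines should also work (assuming the kube-proxy pod template has a single container, as in the describe output above):

# Print just the NO_PROXY value from the kube-proxy DaemonSet container spec.
kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="NO_PROXY")].value}'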

Take the configuration below, which adds a new static worker node, and replace the <> placeholders with real values again. Then run kubeone apply -m <path-to-the-below-config>.

apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
name: proxy-test
versions:
  kubernetes: 'v1.27.1'
features:
  coreDNS:
    replicas: 2
    deployPodDisruptionBudget: true
  nodeLocalDNS:
    deploy: false
clusterNetwork:
  cni:
    cilium:
      enableHubble: true
cloudProvider:
  none: {}
  external: false
apiEndpoint:
  host: '<master-public-IP>'
  port: 6443
controlPlane:
  hosts:
  - publicAddress: '<master-public-IP>'
    privateAddress: '<master-private-IP>'
    sshUsername: root
    sshPrivateKeyFile: './key.pem'
    hostname: master
    isLeader: true
    taints:
    - key: "node-role.kubernetes.io/control-plane"
      effect: "NoSchedule"
staticWorkers:
  hosts:
  - publicAddress: '<worker1-public-IP>'
    privateAddress: '<worker1-private-IP>'
    sshPort: 22
    sshUsername: root
    sshPrivateKeyFile: './key.pem'
    hostname: worker1
  - publicAddress: '<worker2-public-IP>'
    privateAddress: '<worker2-private-IP>'
    sshPort: 22
    sshUsername: root
    sshPrivateKeyFile: './key.pem'
    hostname: worker2
proxy:
  http: "http://<proxy-url>:<proxy-port>"
  https: "http://<proxy-url>:<proxy-port>"
  noProxy: "svc,<master-private-IP>,<worker1-private-IP>,<master-public-IP>,<worker1-public-IP>,<worker2-private-IP>,<worker2-public-IP>"
machineController:
  deploy: false

When kubeone finishes, run kubectl describe daemonsets.apps -n kube-system kube-proxy. You should see that <worker2-private-IP>,<worker2-public-IP> aren't in the NO_PROXY and no_proxy values.
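
The static pod manifests can be checked directly on the control-plane node as well; assuming the default kubeadm manifest directory, something like:

# Run on the control-plane node (as root); /etc/kubernetes/manifests is
# kubeadm's default static pod directory.
grep -Ri "no_proxy" /etc/kubernetes/manifests/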

What KubeOne version are you using?

$ kubeone version
{
  "kubeone": {
    "major": "1",
    "minor": "8",
    "gitVersion": "1.8.0",
    "gitCommit": "c280d14d95ac92a27576851cc058fc84562fcc55",
    "gitTreeState": "",
    "buildDate": "2024-05-14T15:41:44Z",
    "goVersion": "go1.22.3",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "machine_controller": {
    "major": "1",
    "minor": "59",
    "gitVersion": "v1.59.1",
    "gitCommit": "",
    "gitTreeState": "",
    "buildDate": "",
    "goVersion": "",
    "compiler": "",
    "platform": "linux/amd64"
  }
}

What cloud provider are you running on?

In this example, I spawned the VMs in Azure, but the same goes for Hetzner and AWS. I don't think it depends on the cloud provider.

What operating system are you running in your cluster?

Ubuntu 22.04

Additional information

I used Squid as the HTTP proxy while building the k8s cluster.

JKBGIT1 added the kind/bug and sig/cluster-management labels on Jul 22, 2024
kron4eg commented Jul 22, 2024

But we don't set any proxy environment variables in the kube-proxy DaemonSet, nor in static pods.

The only change we make to the static pods is adding the /etc/ssl/certs volume and the SSL_CERT_FILE env variable to kube-controller-manager.yaml.
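
Roughly, that change looks like the sketch below; this is only an illustration of the described patch, not the exact manifest KubeOne renders (the SSL_CERT_FILE value is an assumption):

# Illustrative sketch of the described change to
# /etc/kubernetes/manifests/kube-controller-manager.yaml (not KubeOne's exact output).
spec:
  containers:
  - name: kube-controller-manager
    env:
    - name: SSL_CERT_FILE
      value: /etc/ssl/certs/ca-certificates.crt   # assumed path
    volumeMounts:
    - name: etc-ssl-certs
      mountPath: /etc/ssl/certs
      readOnly: true
  volumes:
  - name: etc-ssl-certs
    hostPath:
      path: /etc/ssl/certs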

kron4eg added the kind/question and kind/discussion labels and removed the kind/bug and kind/question labels on Jul 22, 2024
kron4eg commented Jul 22, 2024

But it seems like kubeadm is doing this...

JKBGIT1 commented Jul 22, 2024

But it seems like kubeadm is doing this...

In that case, I suppose I should open an issue in kubeadm. Besides that, is there a chance you can or plan to do something about it?

kron4eg commented Jul 22, 2024

Let us investigate the possibilities. And please link the future kubeadm issue here in case you create one.

JKBGIT1 commented Aug 15, 2024

Hello, I created a new issue in the kubeadm repo: kubernetes/kubeadm#3099

Besides that, there is a workaround. You can patch the NO_PROXY and no_proxy env variables in the static pods and the kube-proxy DaemonSet with:

NO_PROXY="<new-no-proxy-list>" no_proxy="<new-no-proxy-list>" kubeadm init phase control-plane all --patches .
NO_PROXY="<new-no-proxy-list>" no_proxy="<new-no-proxy-list>" kubeadm init phase addon kube-proxy

EDIT: I followed kubernetes/kubeadm#2771 (comment)
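
To double-check that the workaround took effect, something like the following should do (default kubeadm manifest path, run as root on the control-plane node):

# Regenerated static pod manifests should now carry the new list.
grep -Ri "no_proxy" /etc/kubernetes/manifests/
# The kube-proxy DaemonSet env after rerunning the addon phase.
kubectl -n kube-system describe daemonset kube-proxy | grep -i no_proxy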

kron4eg commented Aug 15, 2024

thanks for updating!
