Issue with Vagrant / CentOS #12

Open
damianoneill opened this issue Feb 14, 2018 · 0 comments
Running with CentOS:

export OS_IMAGE=centos7

After running `vagrant up`, I get the following:

==> master1: TASK [kubernetes/master : Init Kubernetes cluster] *****************************
==> master1: fatal: [192.16.35.12]: FAILED! => {"changed": true, "cmd": "kubeadm init --service-cidr 10.96.0.0/12 --kubernetes-version v1.9.1 --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address 192.16.35.12 --token b0f7b8.8d1767876297d85c ", "delta": "0:01:57.685986", "end": "2018-02-14 14:47:44.580484", "msg": "non-zero return code", "rc": 1, "start": "2018-02-14 14:45:46.894498", "stderr": "\t[WARNING FileExisting-crictl]: crictl not found in system path\ncouldn't initialize a Kubernetes cluster", "stderr_lines": ["\t[WARNING FileExisting-crictl]: crictl not found in system path", "couldn't initialize a Kubernetes cluster"], "stdout": "[init] Using Kubernetes version: v1.9.1\n[init] Using Authorization modes: [Node RBAC]\n[preflight] Running pre-flight checks.\n[preflight] Starting the kubelet service\n[certificates] Generated ca certificate and key.\n[certificates] Generated apiserver certificate and key.\n[certificates] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.16.35.12]\n[certificates] Generated apiserver-kubelet-client certificate and key.\n[certificates] Generated sa key and public key.\n[certificates] Generated front-proxy-ca certificate and key.\n[certificates] Generated front-proxy-client certificate and key.\n[certificates] Valid certificates and keys now exist in \"/etc/kubernetes/pki\"\n[kubeconfig] Wrote KubeConfig file to disk: \"admin.conf\"\n[kubeconfig] Wrote KubeConfig file to disk: \"kubelet.conf\"\n[kubeconfig] Wrote KubeConfig file to disk: \"controller-manager.conf\"\n[kubeconfig] Wrote KubeConfig file to disk: \"scheduler.conf\"\n[controlplane] Wrote Static Pod manifest for component kube-apiserver to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\n[controlplane] Wrote Static Pod manifest for component kube-controller-manager to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\n[controlplane] Wrote Static Pod manifest for component kube-scheduler to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n[etcd] Wrote Static Pod manifest for a local etcd instance to \"/etc/kubernetes/manifests/etcd.yaml\"\n[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory \"/etc/kubernetes/manifests\".\n[init] This might take a minute or longer if the control plane images have to be pulled.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or 
healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.\n\nUnfortunately, an error has occurred:\n\ttimed out waiting for the condition\n\nThis error is likely caused by:\n\t- The kubelet is not running\n\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)\n\t- There is no internet connection, so the kubelet cannot pull the following control plane images:\n\t\t- gcr.io/google_containers/kube-apiserver-amd64:v1.9.1\n\t\t- gcr.io/google_containers/kube-controller-manager-amd64:v1.9.1\n\t\t- gcr.io/google_containers/kube-scheduler-amd64:v1.9.1\n\nIf you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:\n\t- 'systemctl status kubelet'\n\t- 'journalctl -xeu kubelet'", "stdout_lines": ["[init] Using Kubernetes version: v1.9.1", "[init] Using Authorization modes: [Node RBAC]", "[preflight] Running pre-flight checks.", "[preflight] Starting the kubelet service", "[certificates] Generated ca certificate and key.", "[certificates] Generated apiserver certificate and key.", "[certificates] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.16.35.12]", "[certificates] Generated apiserver-kubelet-client certificate and key.", "[certificates] Generated sa key and public key.", "[certificates] Generated front-proxy-ca certificate and key.", "[certificates] Generated front-proxy-client certificate and key.", "[certificates] Valid certificates and keys now exist in \"/etc/kubernetes/pki\"", "[kubeconfig] Wrote KubeConfig file to disk: \"admin.conf\"", "[kubeconfig] Wrote KubeConfig file to disk: \"kubelet.conf\"", "[kubeconfig] Wrote KubeConfig file to disk: \"controller-manager.conf\"", "[kubeconfig] Wrote KubeConfig file to disk: \"scheduler.conf\"", "[controlplane] Wrote Static Pod manifest for component kube-apiserver to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"", "[controlplane] Wrote Static Pod manifest for component kube-controller-manager to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"", "[controlplane] Wrote Static Pod manifest for component kube-scheduler to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"", "[etcd] Wrote Static Pod 
manifest for a local etcd instance to \"/etc/kubernetes/manifests/etcd.yaml\"", "[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory \"/etc/kubernetes/manifests\".", "[init] This might take a minute or longer if the control plane images have to be pulled.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.", "[kubelet-check] It seems like the kubelet isn't running or healthy.", "[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.", "", "Unfortunately, an error has occurred:", "\ttimed out waiting for the condition", "", "This error is likely caused by:", "\t- The kubelet is not running", "\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)", "\t- There is no internet connection, so the kubelet cannot pull the following control plane images:", "\t\t- gcr.io/google_containers/kube-apiserver-amd64:v1.9.1", "\t\t- gcr.io/google_containers/kube-controller-manager-amd64:v1.9.1", "\t\t- gcr.io/google_containers/kube-scheduler-amd64:v1.9.1", "", "If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:", "\t- 'systemctl status kubelet'", "\t- 'journalctl -xeu kubelet'"]}
==> master1: 	to retry, use: --limit @/vagrant/site.retry
==> master1:
==> master1: PLAY RECAP *********************************************************************
==> master1: 192.16.35.10               : ok=9    changed=5    unreachable=0    failed=0
==> master1: 192.16.35.11               : ok=9    changed=5    unreachable=0    failed=0
==> master1: 192.16.35.12               : ok=18   changed=13   unreachable=0    failed=1
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
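
Per the suggestion at the end of the kubeadm output, the first step is to inspect the kubelet on the failing master. A minimal sketch of doing that from the host, assuming the box is named `master1` as in the log above:

```sh
# Run the diagnostics suggested by kubeadm on the master1 guest via vagrant ssh.
vagrant ssh master1 -c 'sudo systemctl status kubelet'
vagrant ssh master1 -c 'sudo journalctl -xeu kubelet'
```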

See kubernetes/kubeadm#613 for a description of the issue.

A resolution from upstream Kubernetes is not targeted until 1.10.

A partial fix is described in the last message on that issue.
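
For reference, one workaround reported for this crictl preflight failure amounts to putting a crictl binary on the node's PATH so kubeadm can find it. A sketch, assuming Go is installed on the guest (the import path below is the kubernetes-incubator location in use at the time; treat it as illustrative):

```sh
# Illustrative workaround: build crictl from cri-tools and put it on the PATH
# so the kubeadm 'FileExisting-crictl' preflight warning is satisfied.
go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
sudo cp "$(go env GOPATH)/bin/crictl" /usr/local/bin/
```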
