Machines: **master**, **worker-0**, **worker-1**
<hr>


Preparing the upgrade

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
sudo mkdir -p /etc/apt/keyrings # the keyrings directory may not exist yet on some systems
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring-1.28.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring-1.28.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee -a /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
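
Optionally, before pinning a version, you can check which 1.28 packages the new repository actually serves (the exact patch version available may differ over time):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
# list candidate kubeadm versions published by the repository
apt-cache madison kubeadm | head -n 5
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~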


To begin, we need to update kubeadm:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
sudo apt-mark unhold kubeadm
sudo apt-get install kubeadm=1.28.8-1.1
sudo apt-mark hold kubeadm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
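
As a quick sanity check (an optional step, not part of the original lab), confirm that the expected kubeadm version is now installed:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
# should report v1.28.8
kubeadm version
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~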

We can get a preview of the upgrade as follows:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
sudo kubeadm upgrade plan


[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.27.12
[upgrade/versions] kubeadm version: v1.28.8
I0408 06:40:22.060915 4163 version.go:256] remote version is much newer: v1.29.3; falling back to: stable-1.28
[upgrade/versions] Target version: v1.28.8
[upgrade/versions] Latest version in the v1.27 series: v1.27.12

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 3 x v1.27.9 v1.28.8

Upgrade to the latest stable version:

COMPONENT CURRENT TARGET
kube-apiserver v1.27.12 v1.28.8
kube-controller-manager v1.27.12 v1.28.8
kube-scheduler v1.27.12 v1.28.8
kube-proxy v1.27.12 v1.28.8
CoreDNS v1.10.1 v1.10.1
etcd 3.5.9-0 3.5.12-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.28.8

_____________________________________________________________________

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


We can now upgrade the cluster components:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}

sudo kubeadm upgrade apply v1.28.8

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.28.8"
[upgrade/versions] Cluster version: v1.27.12
[upgrade/versions] kubeadm version: v1.28.8
[upgrade] Are you sure you want to proceed? [y/N]: y



[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
W0408 06:41:41.559443 4249 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.28.8" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-08-06-41-48/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2343628394"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-08-06-41-48/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)


.....
.....


[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-08-06-41-48/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-08-06-41-48/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1974182237/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.28.8". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We now need to update kubelet and kubectl:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
sudo apt-mark unhold kubectl kubelet
sudo apt-get install kubectl=1.28.8-1.1 kubelet=1.28.8-1.1
sudo apt-mark hold kubectl kubelet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
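
As will be done on the workers below, the kubelet should be restarted after updating the packages (a standard step in the kubeadm upgrade procedure, reproduced here for completeness):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
# reload unit files and restart the kubelet so the new binary takes over
sudo systemctl daemon-reload
sudo systemctl restart kubelet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~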
Verifying the upgrade of the **master**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 19m v1.28.8
worker-0 Ready <none> 14m v1.27.9
worker-1 Ready <none> 14m v1.27.9
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We now need to update the workers:
To be done on nodes **worker-0** and **worker-1**
Preparing the upgrade
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
sudo mkdir -p /etc/apt/keyrings # the keyrings directory may not exist yet on some systems
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring-1.28.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring-1.28.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
sudo apt-mark unhold kubeadm
sudo apt-get install kubeadm=1.28.8-1.1
sudo apt-mark hold kubeadm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As with the master, we need to drain the worker nodes:
On the <font color=red><b>master</b></font>

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
kubectl drain worker-0 --ignore-daemonsets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We now need to update the configuration of our worker-0:
On the <font color=red><b>worker-0</b></font>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
sudo kubeadm upgrade node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2717758596/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Finally, as with the master, we need to update kubelet and kubectl:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
sudo apt-mark unhold kubectl kubelet
sudo apt-get install kubectl=1.28.8-1.1 kubelet=1.28.8-1.1
sudo apt-mark hold kubectl kubelet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Taking care to restart the kubelet:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
sudo systemctl daemon-reload
sudo systemctl restart kubelet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
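
You can optionally check that the kubelet came back up and now reports the new version (an extra verification, not in the original lab):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
# "active" means the service restarted cleanly
sudo systemctl is-active kubelet
# should report v1.28.8
kubelet --version
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~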
Without forgetting to put the node back into service:
On the <font color=red><b>master</b></font>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
kubectl uncordon worker-0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We can now list the nodes:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 25m v1.28.8
worker-0 Ready <none> 19m v1.28.8
worker-1 Ready <none> 19m v1.27.9
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proceed with the upgrade of node **worker-1**
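
The procedure is identical to worker-0; as a condensed sketch (run each command where indicated):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
# on the master: drain the node
kubectl drain worker-1 --ignore-daemonsets
# on worker-1: upgrade kubeadm, then update the node configuration
sudo kubeadm upgrade node
# on worker-1: upgrade kubelet/kubectl and restart the kubelet, then,
# back on the master, make the node schedulable again
kubectl uncordon worker-1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~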
And list the pods to verify that everything is working:
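
For example (one common cluster-wide check; the lab's exact command may differ):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {.zsh .numberLines}
# list pods across all namespaces; all should be Running or Completed
kubectl get pods -A
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~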
