mao@k8s-control-plane-02:~$ kubectl drain --ignore-daemonsets k8s-control-plane-02
node/k8s-control-plane-02 cordoned
Warning: ignoring DaemonSet-managed Pods: calico-system/calico-node-26sbk, calico-system/csi-node-driver-cljz8, kube-system/kube-proxy-xkvj7, metallb-system/speaker-cjz7j
evicting pod calico-system/calico-typha-5579b889c8-kbqk7
evicting pod calico-apiserver/calico-apiserver-5f78767767-89z5t
pod/calico-apiserver-5f78767767-89z5t evicted
pod/calico-typha-5579b889c8-kbqk7 evicted
node/k8s-control-plane-02 drained
mao@k8s-control-plane-02:~$ kubectl get nodes
NAME                   STATUS                     ROLES           AGE   VERSION
k8s-control-plane-01   Ready                      control-plane   42d   v1.30.3
k8s-control-plane-02   Ready,SchedulingDisabled   control-plane   42d   v1.30.2
k8s-control-plane-03   Ready                      control-plane   42d   v1.30.2
k8s-worker-01          Ready                      <none>          42d   v1.30.2
k8s-worker-02          Ready                      <none>          42d   v1.30.2
mao@k8s-control-plane-02:~$ sudo apt update
[sudo] password for mao:
Hit:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb InRelease
Hit:2 http://jp.archive.ubuntu.com/ubuntu noble InRelease
Hit:3 http://security.ubuntu.com/ubuntu noble-security InRelease
Get:4 http://jp.archive.ubuntu.com/ubuntu noble-updates InRelease [126 kB]
Hit:5 http://jp.archive.ubuntu.com/ubuntu noble-backports InRelease
Get:6 http://jp.archive.ubuntu.com/ubuntu noble-updates/main amd64 Packages [344 kB]
Get:7 http://jp.archive.ubuntu.com/ubuntu noble-updates/universe amd64 Packages [321 kB]
Fetched 791 kB in 3s (298 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
31 packages can be upgraded. Run 'apt list --upgradable' to see them.
mao@k8s-control-plane-02:~$ sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='1.30.3-*' && \
sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb InRelease
Hit:2 http://jp.archive.ubuntu.com/ubuntu noble InRelease
Hit:3 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:4 http://jp.archive.ubuntu.com/ubuntu noble-updates InRelease
Hit:5 http://jp.archive.ubuntu.com/ubuntu noble-backports InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Selected version '1.30.3-1.1' (isv:kubernetes:core:stable:v1.30:pkgs.k8s.io [amd64]) for 'kubeadm'
The following packages will be upgraded:
kubeadm
1 upgraded, 0 newly installed, 0 to remove and 30 not upgraded.
Need to get 10.4 MB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb kubeadm 1.30.3-1.1 [10.4 MB]
Fetched 10.4 MB in 0s (30.5 MB/s)
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 110849 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.30.3-1.1_amd64.deb ...
Unpacking kubeadm (1.30.3-1.1) over (1.30.2-1.1) ...
Setting up kubeadm (1.30.3-1.1) ...
Scanning processes...
Scanning candidates...
Scanning linux images...
Pending kernel upgrade!
Running kernel version:
6.8.0-36-generic
Diagnostics:
The currently running kernel version is not the expected kernel version
6.8.0-40-generic.
Restarting the system to load the new kernel will not be handled automatically,
so you should consider rebooting.
Restarting services...
Service restarts being deferred:
systemctl restart systemd-logind.service
systemctl restart unattended-upgrades.service
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
kubeadm set on hold.
mao@k8s-control-plane-02:~$ sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.30.3-*' kubectl='1.30.3-*' && \
sudo apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb InRelease
Hit:2 http://jp.archive.ubuntu.com/ubuntu noble InRelease
Hit:3 http://jp.archive.ubuntu.com/ubuntu noble-updates InRelease
Hit:4 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:5 http://jp.archive.ubuntu.com/ubuntu noble-backports InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Selected version '1.30.3-1.1' (isv:kubernetes:core:stable:v1.30:pkgs.k8s.io [amd64]) for 'kubelet'
Selected version '1.30.3-1.1' (isv:kubernetes:core:stable:v1.30:pkgs.k8s.io [amd64]) for 'kubectl'
The following packages will be upgraded:
kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 28 not upgraded.
Need to get 28.9 MB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb kubectl 1.30.3-1.1 [10.8 MB]
Get:2 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.30/deb kubelet 1.30.3-1.1 [18.1 MB]
Fetched 28.9 MB in 1s (55.7 MB/s)
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 110849 files and directories currently installed.)
Preparing to unpack .../kubectl_1.30.3-1.1_amd64.deb ...
Unpacking kubectl (1.30.3-1.1) over (1.30.2-1.1) ...
Preparing to unpack .../kubelet_1.30.3-1.1_amd64.deb ...
Unpacking kubelet (1.30.3-1.1) over (1.30.2-1.1) ...
Setting up kubectl (1.30.3-1.1) ...
Setting up kubelet (1.30.3-1.1) ...
Scanning processes...
Scanning candidates...
Scanning linux images...
Pending kernel upgrade!
Running kernel version:
6.8.0-36-generic
Diagnostics:
The currently running kernel version is not the expected kernel version
6.8.0-40-generic.
Restarting the system to load the new kernel will not be handled automatically,
so you should consider rebooting.
Restarting services...
systemctl restart kubelet.service
Service restarts being deferred:
systemctl restart systemd-logind.service
systemctl restart unattended-upgrades.service
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
kubelet set on hold.
kubectl set on hold.
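The unhold / pin-install / re-hold dance appears twice above (once for kubeadm, once for kubelet and kubectl), so it can be folded into one helper. A sketch, not part of the original session: the function only prints the commands it would run, and the version pin and package names are passed in as arguments.

```shell
# Sketch: the apt-mark unhold / pin-install / hold pattern used above,
# parameterized over the pinned version and the package list. Each step is
# echoed rather than executed; remove the echos to run it for real.
pin_upgrade() {
  ver="$1"; shift
  pinned=""
  for p in "$@"; do pinned="$pinned $p=$ver"; done
  echo "sudo apt-mark unhold $*"
  echo "sudo apt-get update"
  echo "sudo apt-get install -y$pinned"
  echo "sudo apt-mark hold $*"
}

pin_upgrade '1.30.3-*' kubelet kubectl
```

The hold/unhold bracketing matters: without it, a routine `apt upgrade` could bump the Kubernetes packages past the version the cluster has been upgraded to.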
mao@k8s-control-plane-02:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.3", GitCommit:"6fc0a69044f1ac4c13841ec4391224a2df241460", GitTreeState:"clean", BuildDate:"2024-07-16T23:53:15Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}
mao@k8s-control-plane-02:~$ sudo kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
W0811 23:15:18.916628 16854 compute.go:93] Different API server versions in the cluster were discovered: v1.30.3 on nodes [k8s-control-plane-01], v1.30.2 on nodes [k8s-control-plane-02 k8s-control-plane-03]. Please upgrade your control plane nodes to the same version of Kubernetes
[upgrade/versions] Cluster version: 1.30.3
[upgrade/versions] kubeadm version: v1.30.3
[upgrade/versions] Target version: v1.30.3
[upgrade/versions] Latest version in the v1.30 series: v1.30.3
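The plan output is a good place to sanity-check version skew: kubeadm only supports upgrading within the same minor series or to the next minor release. A minimal shell check of that rule, with the version strings hard-coded from the output above rather than queried from a live cluster:

```shell
# Sketch: kubeadm upgrades at most one minor version at a time. Compare the
# minor components of the running and target versions (strings taken from the
# `kubeadm upgrade plan` output above).
minor() { echo "${1#v}" | cut -d. -f2; }
skew_ok() {
  from=$(minor "$1"); to=$(minor "$2")
  [ $((to - from)) -ge 0 ] && [ $((to - from)) -le 1 ]
}

skew_ok v1.30.2 v1.30.3 && echo "patch upgrade within v1.30: ok"
skew_ok v1.29.7 v1.31.0 || echo "skipping a minor release: not supported"
```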
mao@k8s-control-plane-02:~$ sudo kubeadm upgrade apply v1.30.3
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.30.3"
[upgrade/versions] Cluster version: v1.30.2
[upgrade/versions] kubeadm version: v1.30.3
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.30.3" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-08-11-23-15-44/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests3449985093"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-08-11-23-15-44/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-08-11-23-15-44/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-08-11-23-15-44/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config534750710/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[upgrade/addons] skip upgrade addons because control plane instances [k8s-control-plane-03] have not been upgraded
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.30.3". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
mao@k8s-control-plane-02:~$ kubectl get nodes
NAME                   STATUS                     ROLES           AGE   VERSION
k8s-control-plane-01   Ready                      control-plane   42d   v1.30.3
k8s-control-plane-02   Ready,SchedulingDisabled   control-plane   42d   v1.30.3
k8s-control-plane-03   Ready                      control-plane   42d   v1.30.2
k8s-worker-01          Ready                      <none>          42d   v1.30.2
k8s-worker-02          Ready                      <none>          42d   v1.30.2
mao@k8s-control-plane-02:~$ kubectl uncordon k8s-control-plane-02
node/k8s-control-plane-02 uncordoned
mao@k8s-control-plane-02:~$ kubectl get nodes
NAME                   STATUS          ROLES           AGE   VERSION
k8s-control-plane-01   Ready           control-plane   42d   v1.30.3
k8s-control-plane-02   Ready           control-plane   42d   v1.30.3
k8s-control-plane-03   Ready           control-plane   42d   v1.30.2
k8s-worker-01          Ready           <none>          42d   v1.30.2
k8s-worker-02          Ready           <none>          42d   v1.30.2
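Which nodes still need the procedure can be read straight off the VERSION column. A small awk sketch over a captured copy of the table above (the sample is embedded so the snippet runs without cluster access):

```shell
# Sketch: list nodes still on the old patch release, using a captured copy of
# the `kubectl get nodes` output above instead of a live API call.
kubectl_get_nodes_sample() {
cat <<'EOF'
NAME                   STATUS   ROLES           AGE   VERSION
k8s-control-plane-01   Ready    control-plane   42d   v1.30.3
k8s-control-plane-02   Ready    control-plane   42d   v1.30.3
k8s-control-plane-03   Ready    control-plane   42d   v1.30.2
k8s-worker-01          Ready    <none>          42d   v1.30.2
k8s-worker-02          Ready    <none>          42d   v1.30.2
EOF
}

# $5 is the VERSION column; print the names of nodes not yet on v1.30.3.
kubectl_get_nodes_sample | awk '$5 == "v1.30.2" {print $1}'
```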
mao@k8s-control-plane-02:~$
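At this point k8s-control-plane-03 and both workers are still on v1.30.2. The remaining per-node sequence can be sketched as below; note that subsequent control-plane nodes run `kubeadm upgrade node` rather than `upgrade apply`, and the apt package upgrades (the same unhold/pin/hold pattern shown earlier) are elided here. The `run` guard only prints each command while `DRY_RUN=1`; unset it on the actual node.

```shell
# Sketch of the remaining per-node steps, shown for k8s-control-plane-03; the
# workers follow the same drain / upgrade / uncordon pattern. With DRY_RUN=1
# (the default) each command is printed instead of executed.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run kubectl drain --ignore-daemonsets k8s-control-plane-03
run sudo kubeadm upgrade node          # not `upgrade apply` on later nodes
run sudo systemctl restart kubelet     # after upgrading the kubelet package
run kubectl uncordon k8s-control-plane-03
```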