While creating a v1.18 cluster, it still pulls the k8s.gcr.io/pause:3.1 image #1471
Comments
We need to update CRI to use the new pause:3.2 image.
We really need kind & containerd to be in sync here; older kubernetes versions will not use 3.2. We can probably configure CRI & kubeadm ourselves and auto-preload it ourselves.
Do you mean that the user uses the pod-infra-container-image configuration item in the configuration file?
Edit: Or do we directly preload both pause:3.1 and pause:3.2? The pause image is small:
k8s.gcr.io/pause 3.2 80d28bedfe5d 8 weeks ago 683kB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 2 years ago 742kB
No, we should not load both.
The root problem here is that kubeadm and CRI are not aligned on this. We should align them to specify the same image.
I think the best approach is probably to teach containerd's CRI config to use the image from kubeadm at node build time. This is going to be brittle though; we can't query that directly AFAIK :/
A different approach would be always setting everything to a particular pause image regardless of kubernetes version, but that's suboptimal from an upstream perspective because now we're not testing the default, which we generally do when a default exists.
Just upgrading containerd is a partial implementation of the latter approach.
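For context, the containerd-side knob involved here is the CRI plugin's sandbox image setting. A minimal sketch of what aligning it would touch, assuming the v2 containerd config format and a hard-coded image (the key path differs for older config versions, and the actual fix would fill this value from kubeadm at node build time rather than hard-coding it):

```toml
# Sketch only, not kind's actual implementation: the setting in
# /etc/containerd/config.toml that controls which pause image containerd
# uses for pod sandboxes (config format version 2 assumed).
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "k8s.gcr.io/pause:3.2"
```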
This one is going to be frustrating; there is no non-brittle way to detect this.
Maybe we can change containerd's config file? It has an option to set the sandbox (pause) image.
Yes, that's known. The problem is knowing which image to use in a non-brittle fashion. Also, kubeadm does not have an option to specify this. Frankly, kubeadm should not be pulling images in kind; the preflights are useless and time-wasting in this context. If we finally drop 1.11 and 1.12 k8s we can just skip the preflight and ship whatever image we tell containerd to use. We'd wind up back at square one for detecting etcd vs pause though. I think we're going to have to do the awful substring match that totally won't ever bite us 🙄
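To make the "awful substring match" concrete, a rough sketch of that detection (hypothetical, not what kind ships) could look like:

```sh
# Hypothetical sketch of the brittle detection: ask kubeadm which images it
# wants for this kubernetes version and fish the pause image out by substring.
PAUSE_IMAGE="$(kubeadm config images list 2>/dev/null | grep '/pause:')"
echo "kubeadm expects pause image: ${PAUSE_IMAGE}"
# ...this value would then have to be written into containerd's sandbox_image
# setting, which is exactly the coupling being called brittle here.
```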
/lifecycle active
xref: kubernetes/kubeadm#2020
AFAICT, per kubernetes/kubeadm#2020 there's no config field for this and they're currently not inclined to add one. We can do the containerd config override; there's no reason for this to be tied to the kubernetes version. We could work around it for Kubernetes 1.13+ by skipping preflight entirely (and thus no pulling; we don't care if kubeadm is aware of this, and we don't really have much use for preflight anyhow), but we'd still need to ignore it from the kubeadm image list.
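For illustration, "skipping preflight entirely" maps onto kubeadm's phase skipping on newer releases; a sketch only, where the config path is an assumption rather than kind's real invocation:

```sh
# Sketch: run init without the preflight phase so kubeadm never tries to
# pre-pull images itself. /kind/kubeadm.conf is an assumed path.
kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf
```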
+1. For this problem, I think it is easier to load both images directly than to maintain brittle logic (although this is a temporary solution). Then we can consider publishing pre-built base images for different kubernetes versions. WDYT?
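To make the proposal concrete, a hedged sketch of what "load both images directly" could look like against a running node (node name and flow assumed; kind would actually do this at image build time):

```sh
# Hypothetical workaround sketch: pull both pause images on the host and
# import them into the node's containerd (k8s.io namespace).
NODE=kind-control-plane   # assumed default node container name
for tag in 3.1 3.2; do
  docker pull "k8s.gcr.io/pause:${tag}"
  docker save "k8s.gcr.io/pause:${tag}" -o "pause-${tag}.tar"
  docker cp "pause-${tag}.tar" "${NODE}:/pause-${tag}.tar"
  docker exec "${NODE}" ctr --namespace=k8s.io images import "/pause-${tag}.tar"
done
```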
That's a ~1MB cop-out already though, and we'll eventually need 3 or more.
CRI tools is backwards compatible; it adds functionality, and those versions are just newer over time. Adding multiple versions makes custom base images less manageable (e.g. for another architecture), and then we are still incorrectly managing the pause image; it should be tied only to the pod implementation.
(Also, the pause image is loaded at node image build time, so multiple bases are unnecessary.)
I started writing something after thinking about this morning's discussion, but the node build code is such a mess and I don't want to just insert yet another hack here. I'll get a PR up soon.
Should be fixed by #1505. We may modify the approach in the future, but at least this is good enough for now, I hope.
Thanks!
What happened:
When I create a v1.18 cluster (offline), it fails.
Debug:
containerd image list:
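(The original image listing did not survive here; for reference, one way to inspect containerd's images inside a kind node is sketched below, assuming the default node container name.)

```sh
# List the images known to containerd inside the node (node name assumed):
docker exec -it kind-control-plane crictl images
# or directly via containerd's CLI:
docker exec -it kind-control-plane ctr --namespace=k8s.io images list
```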
What you expected to happen:
The cluster is created successfully.
How to reproduce it (as minimally and precisely as possible):
In an offline environment:
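A hedged reconstruction of the repro steps (the exact commands were not preserved; this assumes the default v1.18 node image):

```sh
# While online: fetch only the node image.
docker pull kindest/node:v1.18.0
# Then go offline and create the cluster; containerd still defaults to
# k8s.gcr.io/pause:3.1, which is not preloaded and cannot be pulled offline.
kind create cluster --image kindest/node:v1.18.0
```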
Anything else we need to know?:
Environment:
- kind version (use kind version): kind v0.8.0-alpha+c68a1cf537d680 go1.14.2 linux/amd64
- Kubernetes version (use kubectl version):
- Docker version (use docker info):
- OS (e.g. from /etc/os-release):