Support multi-arch builds #566
Conversation
[noissue] force-pushed from 38ab20e to df305f1
podman build --format docker --file images/Containerfile.core.base --tag pulp/base:${TEMP_BASE_TAG} .
podman build --format docker --file images/pulp_ci_centos/Containerfile --tag pulp/pulp-ci-centos:${TEMP_BASE_TAG} --build-arg FROM_TAG=${TEMP_BASE_TAG} .
sudo podman run --rm --privileged multiarch/qemu-user-static --reset -p yes
for ARCH in arm64 amd64
Can we have different workflows/steps for each arch, so we don't have to use more bash magic here?
I'm not too familiar with GitHub Actions, but we would need to shift the complexity into some kind of fan-out and collection step: multiple artifacts via podman save instead of one, including ID management, plus gathering all the images in one place for the manifest push at the end.
I don't know whether and/or how that is possible with a single definition of the desired architectures.
So IMO we would just exchange bash magic for GitHub Actions magic, which I don't think would be worth it.
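For reference, the single-job approach under discussion can be sketched roughly as follows. This is a hedged sketch, not the PR's actual workflow: the tag, file paths, and the `RUN_MULTIARCH_BUILD` guard are placeholders.

```shell
#!/usr/bin/env bash
# Sketch of the single-job multi-arch build: register QEMU binfmt
# handlers once, then build each architecture in a loop and collect
# the per-arch images into one manifest list via `podman build --manifest`.
set -euo pipefail

build_multiarch() {
  local tag="${TEMP_BASE_TAG:-multiarch-test}"  # placeholder tag

  # One-time binfmt registration so podman can emulate non-native arches
  sudo podman run --rm --privileged multiarch/qemu-user-static --reset -p yes

  # Each per-arch build is appended to the same manifest list
  for ARCH in arm64 amd64; do
    podman build --format docker --arch "$ARCH" \
      --file images/Containerfile.core.base \
      --manifest "pulp/base:${tag}" .
  done
}

# Only attempt the build where it is explicitly requested and podman exists
if [ "${RUN_MULTIARCH_BUILD:-}" = "1" ] && command -v podman >/dev/null 2>&1; then
  build_multiarch
fi
```

The fan-out alternative would split the loop body into per-arch jobs and reassemble the manifest list in a final job, which is exactly the artifact shuffling described above.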
@StopMotionCuber I agree, let's stick with this for now.
Looks good to me from the index building and pushing perspective. ARM64 and AMD64 images should be part of one OCI index.
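To make that concrete, pushing the collected per-arch images as a single OCI index might look like the sketch below. The registry destination is a placeholder, not the project's actual path.

```shell
#!/usr/bin/env bash
# Sketch: push a local manifest list as one OCI image index.
set -euo pipefail

push_index() {
  local image="$1"  # local manifest list, e.g. pulp/base:sometag
  local dest="$2"   # destination, e.g. ghcr.io/OWNER/base:sometag (placeholder)

  # --all pushes every per-arch image referenced by the list;
  # --format oci emits an OCI index rather than a Docker manifest list
  podman manifest push --all --format oci "$image" "docker://$dest"
}
```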
Thanks for merging this 🎉
Trying a fresh run for supporting multi-arch builds, see #546, #562, #563, #564 and #565 for some reference.
In this run, I've tested the changes end to end, with additional modifications (not present in this PR).
You can take a look at the additional modifications in my branch here (it does not push to Docker Hub or quay.io, only to ghcr.io, with the path changed to my personal namespace).
The Actions run for that branch was passing.
You can see published packages here. To inspect them, run e.g.
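One generic way to inspect a published multi-arch package is to dump its manifest list; in the sketch below the ghcr.io path is a placeholder, not the actual package location.

```shell
#!/usr/bin/env bash
# Sketch: print the manifest list / OCI index of a published image,
# which shows one entry per architecture.
set -euo pipefail

list_arches() {
  local image="$1"  # e.g. ghcr.io/OWNER/pulp-ci-centos:latest (placeholder)
  # Prints the index as JSON; each entry has a platform.architecture field
  podman manifest inspect "docker://$image"
}
```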
There are issues with the base image, which is only published for
amd64
. I identified the cause: the corresponding Dockerfile uses FROM docker.io/centos/nginx-116-centos7:1.16
as its base. That image is not multi-arch (and therefore neither are its descendants) and, according to Docker Hub, last saw an update two years ago. Is anyone still using the web images? They don't seem that well maintained.
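Whether a base image supports a given architecture can be checked up front before building on it. A sketch, assuming jq is installed; the image names in the comment are taken from the discussion above but the helper itself is hypothetical:

```shell
#!/usr/bin/env bash
# Sketch: check whether an image's manifest list contains a given
# architecture. Plain single-arch images have no manifest list at all,
# so `podman manifest inspect` fails for them and the check returns false.
set -euo pipefail

has_arch() {
  local image="$1" arch="$2"
  podman manifest inspect "docker://$image" \
    | jq -r '.manifests[].platform.architecture' \
    | grep -qx "$arch"
}

# Example (not run here):
#   has_arch docker.io/centos/nginx-116-centos7:1.16 arm64 || echo "no arm64"
```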