[FEATURE]: Support a Nix Derivation based OCI image builder much like Kaniko is for Dockerfiles #6344
I cleaned things up a tad on my prototype of this functionality, but it's definitely still prototype quality. If anyone wants to check it out or play with it, it's here: main...mattpolzin:garden:remote-nix-builds-exploration

I'll include some files that mostly set up a working Garden project using the new Nix builder. If you build the aforementioned branch and use that Garden executable, the following project works even without Nix installed on your system (after all, the point is that images are built remotely, as with Kaniko).

project.garden.yml:

```yaml
apiVersion: garden.io/v1
kind: Project
name: garden-explorations
defaultEnvironment: local
modules:
  include:
    - "*.garden.yml"
environments:
  - name: local
providers:
  - name: kubernetes
    environments: [local]
    context: docker-desktop
    namespace: test
    imagePullSecrets:
      - name: dockerhub-secret
    nix:
      image: docker.io/nixos/nix
    deploymentRegistry:
      hostname: docker.io
      namespace:
    buildMode: nix
```

nginx.garden.yml:

```yaml
kind: Build
description: nginx container
type: container
name: test-nginx
spec:
  nixfile: nginx.nix

---

kind: Deploy
description: nginx
type: container
name: nginx
build: test-nginx
```

This Nix file is not trivial, but only because I went for a working nginx configuration rather than the bare minimum.

nginx.nix:

```nix
{ nixpkgs ? import <nixpkgs> {} }:
let
  inherit (nixpkgs) busybox lib writeText runCommand nginx;
  adduser = "${busybox}/bin/adduser";
  files = runCommand "mk-files" {} ''
    mkdir -p $out/share
    echo 'hi' > $out/share/index.html
  '';
  nginxConf = writeText "nginx-conf" ''
    events {}
    http {
      server {
        listen 8080;
        location / {
          autoindex on;
          root ${files}/share;
        }
      }
    }
  '';
in
nixpkgs.dockerTools.buildLayeredImage {
  name = "nginx2";
  enableFakechroot = true;
  fakeRootCommands = ''
    mkdir -p /etc
    touch /etc/passwd
    ${adduser} nginx -H
    mkdir -p /var/log/nginx
    chown nginx /var/log/nginx
    ln -sf /dev/stdout /var/log/nginx/access.log
    ln -sf /dev/stderr /var/log/nginx/error.log
    mkdir -p /tmp
    chown nginx /tmp
  '';
  config = {
    User = "nginx";
    Cmd = [
      "${lib.getExe nginx}"
      "-g"
      "daemon off;"
      "-c"
      "${nginxConf}"
    ];
  };
}
```
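For trying the Nix expression on its own, outside Garden: `dockerTools.buildLayeredImage` produces a Docker-loadable tarball, so a rough local smoke test (a sketch, assuming Nix and Docker are installed locally) might look like:

```shell
# Build the image tarball; `result` becomes a symlink into the Nix store.
nix-build nginx.nix

# Load the tarball into the local Docker daemon; `docker load` prints the
# loaded image name (the tag is derived from the store path), which we
# capture and run, mapping the container's port 8080 to the host.
image="$(docker load < result | sed -n 's/^Loaded image: //p')"
docker run --rm -p 8080:8080 "$image"
```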
@mattpolzin Amazing work! It sounds like the nix functionality can be a plugin, similar to the jib plugin. We would accept a community contribution, but I'd like you to know that we are currently working on something that we want to release in the coming weeks, where it will be super easy to declare your own action types and publish your own custom action types on NPM. If you are interested in that, feel free to join the waitlist here: https://garden.io/grow
Thanks for the tip. I'll see if I can figure out how to adapt the jib plugin example to my needs when I get the chance. One difference between that example and my needs is that the jib plugin is documented as building locally and then pushing the resulting image, whereas an important aspect of my desire to replace Kaniko is the ability to build the images in a remote K8s cluster.

@mattpolzin Ah okay, interesting use case! How come you need to build the image remotely? Is it to avoid having Nix itself (together with the Nix store) as a dependency, or for other reasons?

Ah, I think you've inadvertently caught me in an exaggeration. I didn't mean to do so, but I overstated this as a need when it's more of a desire. The most honest answer to why we want building to be done remotely is "because we want a drop-in replacement for Kaniko's remote building."

That said, if I think on it for a minute, there are some small differences between host and in-cluster building that do benefit my org. We free up compute resources on our laptops by letting Kaniko build images in-cluster. We also benefit from reduced bandwidth consumption on employee home networks, speed improvements for employees with slower upload speeds, and even a small boost from the fact that we host our own Harbor docker image registry in the same cluster where we run Garden deploys, so request latency between in-cluster Kaniko builders and our registry is very low.

It's tempting to point to the reason you stated -- not needing Nix installed on a laptop in order to do a Garden deploy -- and early in the exploration that would be a true benefit (as I swap Dockerfiles for Nix files while everyone else's environment stays unchanged), but it would be short-lived, since I also hope to move our dev tooling package management over to Nix to get even more bang for my buck out of the derivations that build our services.

I haven't had time to experiment with any of this further, but I did have a realization that there is actually one pretty good reason to build the images remotely instead of locally: avoiding the complexity of cross-compilation. Whether the dev machine is arm and the server x86, or the local machine runs Darwin and the server Linux, cross-compilation would come up for us right away at my org.
Feature Request
Background / Motivation
It would be very nice to be able to explore building OCI images via Nix and have that plug into Garden's build/deploy pipeline seamlessly.
I'm not committing to seeing this work through, but I am looking to gauge whether the Garden project would even want such a thing assuming I did find the time to do the implementation myself eventually.
What should the user be able to do?
Given a Nix file that builds a Docker image (not dissimilar to the very simple one I have pasted immediately below), the user can build & deploy the same as they can today given the simple Dockerfile shown below.
Nix:
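(The original Nix snippet did not survive the page formatting; the following is a hypothetical minimal stand-in using nixpkgs' `dockerTools`, not the author's original. The image name is illustrative.)

```nix
# Hypothetical rough equivalent of `FROM nginx`: wrap nixpkgs' nginx
# package into an OCI image tarball.
{ nixpkgs ? import <nixpkgs> {} }:
nixpkgs.dockerTools.buildLayeredImage {
  name = "nginx-simple";  # illustrative name
  config.Cmd = [ "${nixpkgs.lib.getExe nixpkgs.nginx}" "-g" "daemon off;" ];
}
```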
Dockerfile:

```dockerfile
FROM nginx
```

These examples are of course not representative of how Nix derivations or Dockerfiles handle complexity, but there are very real differences between the two approaches for any real-world image.
Why do they want to do this? What problem does it solve?
Flexibility to not be tied to Dockerfiles as the source code for the OCI images used in deploys.
Suggested Implementation(s)
I've hacked this together as a proof of concept for myself by reusing most of the Kaniko code. I think the cleanest implementation would indeed look very similar to the Kaniko builder functionality.
The way I've hacked it together is by replacing the Kaniko executor image with the `nixos/nix` image. Then I swapped out the `kanikoCommand` for a `nix-build` command and re-used the `dockerfile` field by simply setting it to the name of a `.nix` file. Finally, the `nix-build` command is followed by a `nix-shell` command that runs a `skopeo copy` to copy the OCI tarball produced by Nix to the same destination Kaniko would have pushed the final image to.

On top of my implementation being hacky and lacking tests, it also disregards Kaniko's ability to push layers for caching -- in an analogous Nix implementation, I would add support for caching all of the Nix store paths used to create the Docker image.
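Run inside the builder pod, the sequence described above would look roughly like the following (a sketch with a hypothetical registry destination, not the prototype's exact commands):

```shell
# Build the image tarball from the .nix file that stands in for the Dockerfile.
nix-build nginx.nix -o image

# Push the tarball to the registry Kaniko would have pushed to.
# `docker-archive:` is skopeo's source transport for Docker-loadable tarballs
# such as those produced by dockerTools.buildLayeredImage; the docker://
# destination below is illustrative.
nix-shell -p skopeo --run \
  'skopeo copy docker-archive:./image docker://docker.io/example/test-nginx:latest'
```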
How important is this feature for you/your team?
🌹 It’s a nice-to-have, but nice things are nice 🙂