
[FEATURE]: Support a Nix Derivation based OCI image builder much like Kaniko is for Dockerfiles #6344

Open
mattpolzin opened this issue Aug 2, 2024 · 6 comments



mattpolzin commented Aug 2, 2024

Feature Request

Background / Motivation

It would be very nice to be able to explore building OCI images via Nix and have that plug into Garden's build/deploy pipeline seamlessly.

I'm not committing to seeing this work through, but I am looking to gauge whether the Garden project would even want such a thing, assuming I eventually find the time to implement it myself.

What should the user be able to do?

Given a Nix file that builds a Docker image (not dissimilar to the very simple one pasted immediately below), the user can build and deploy just as they can today with the simple Dockerfile shown below.

Nix:

{ nixpkgs ? import <nixpkgs> {} }:
nixpkgs.dockerTools.buildImage {
  name = "nginx";
  contents = [nixpkgs.nginx];
}

Dockerfile:

FROM nginx

These examples are of course not representative of how Nix derivations or Dockerfiles handle complexity, but there are very real differences between the two approaches for any real-world image.

Why do they want to do this? What problem does it solve?

Flexibility: not being tied to Dockerfiles as the source code for the OCI images used in deploys.

Suggested Implementation(s)

I've hacked this together as a proof of concept for myself by reusing most of the Kaniko code. I think the cleanest implementation would indeed look very similar to the Kaniko builder functionality.

The way I've hacked it together is by replacing the Kaniko executor image with the nixos/nix image. Then I swapped out the kanikoCommand for a nix-build command and re-used the dockerfile field by simply setting it to the name of a .nix file. Finally, the nix-build command is followed by a nix-shell command that runs a skopeo copy to move the OCI tarball produced by Nix to the same destination Kaniko would have pushed the final image to.
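The two in-pod steps described above could be sketched roughly like this. This is a hypothetical illustration, not the prototype's actual code; the file name, image reference, and use of `./result` are assumptions.

```shell
# Hypothetical sketch of the two commands the prototype runs inside a
# nixos/nix builder pod.  They are assembled as strings here so the
# sequence is easy to read; the real pod would execute them.

# Step 1: realize the derivation; nix-build leaves ./result pointing at
# the image tarball produced by dockerTools.
build_cmd='nix-build nginx.nix'

# Step 2: copy the tarball straight to the registry with skopeo, pulled
# in ad hoc via nix-shell (image reference is a placeholder).
push_cmd='nix-shell -p skopeo --run "skopeo copy docker-archive:./result docker://docker.io/example/nginx:latest"'

printf '%s\n%s\n' "$build_cmd" "$push_cmd"
```

The `docker-archive:` transport works here because `dockerTools.buildImage` produces a docker-loadable tarball; no local Docker daemon is needed at any point.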

On top of my implementation being hacky and lacking tests, it also disregards Kaniko's ability to push layers for caching -- in an analogous Nix implementation, I would add support for caching all of the Nix store paths used to create the Docker image.
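One plausible shape for the store-path caching mentioned above (the cache URL is purely illustrative) would be to push the closure of the build result to a binary cache after each build, so subsequent builder pods can substitute paths instead of rebuilding them:

```shell
# Hypothetical sketch: after nix-build, push every store path in the
# image's closure to a shared binary cache.  The URL is a placeholder,
# and `nix copy` may require the nix-command experimental feature.
cache_url='s3://example-nix-cache'
cache_cmd="nix copy --to '${cache_url}' ./result"
echo "$cache_cmd"
```

Fresh builder pods would then configure the same cache as a substituter, which is roughly the Nix analogue of Kaniko's layer caching.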

How important is this feature for you/your team?

🌹 It’s a nice to have, but nice things are nice 🙂


mattpolzin commented Aug 5, 2024

I cleaned things up a tad on my prototype of this functionality, but it's definitely still prototype quality. If anyone wants to check it out or play with it, it's here: main...mattpolzin:garden:remote-nix-builds-exploration

I'll include some files that mostly set up a working Garden project using the new Nix builder. If you build the aforementioned branch and use that Garden executable, the following project works even without Nix installed on your system (after all, the point is that images are built remotely, as with Kaniko).

project.garden.yml

apiVersion: garden.io/v1
kind: Project
name: garden-explorations
defaultEnvironment: local
modules:
  include:
    - "*.garden.yml"
environments:
  - name: local
providers:
  - name: kubernetes
    environments: [local]
    context: docker-desktop
    namespace: test
    imagePullSecrets:
      - name: dockerhub-secret
    nix:
      image: docker.io/nixos/nix
    deploymentRegistry:
      hostname: docker.io
      namespace:
    buildMode: nix

nginx.garden.yml

kind: Build
description: nginx container
type: container
name: test-nginx
spec:
  nixfile: nginx.nix

---

kind: Deploy
description: nginx
type: container
name: nginx
build: test-nginx

This Nix file is not trivial, but only because I went for an nginx container that has a simple index.html file packaged in with it. It's actually easier to package e.g. an Elixir service.

nginx.nix

{ nixpkgs ? import <nixpkgs> {} }:
let inherit (nixpkgs) busybox lib writeText runCommand nginx;
    adduser = "${busybox}/bin/adduser";
 
    files = runCommand "mk-files" {} ''
      mkdir -p $out/share
      echo 'hi' > $out/share/index.html
    '';
    nginxConf = writeText "nginx-conf" ''
      events {}
      http {
        server {
          listen 8080;
 
          location / {
            autoindex on;
            root ${files}/share;
          }
        }
      }
    '';
in
nixpkgs.dockerTools.buildLayeredImage {
  name = "nginx2";
  enableFakechroot = true;
  fakeRootCommands = ''
    mkdir -p /etc
    touch /etc/passwd
    ${adduser} nginx -H
 
    mkdir -p /var/log/nginx
    chown nginx /var/log/nginx
    ln -sf /dev/stdout /var/log/nginx/access.log
    ln -sf /dev/stderr /var/log/nginx/error.log
 
    mkdir -p /tmp
    chown nginx /tmp
  '';
  config = {
    User = "nginx";
    Cmd = [
      "${lib.getExe nginx}"
      "-g"
      "daemon off;"
      "-c"
      "${nginxConf}"
    ];
  };
}


stefreak commented Aug 6, 2024

@mattpolzin Amazing work! It sounds like the nix functionality could be a plugin, similar to the jib-container type; see also: https://github.com/garden-io/garden/tree/main/plugins/jib

We would accept a community contribution, but I'd like you to know that we are currently working on something we plan to release in the coming weeks that will make it super easy to declare your own action types and publish them on NPM. If you are interested, feel free to join the waitlist here: https://garden.io/grow

mattpolzin commented

Thanks for the tip. I’ll see if I can figure out how to adapt the jib plugin example to my needs when I get the chance. One difference between that example and my needs: the jib plugin is documented as building locally and then pushing the resulting image, whereas an important aspect of replacing Kaniko for me is the ability to build the images in a remote K8s cluster.


stefreak commented Aug 8, 2024

@mattpolzin Ah okay, interesting use case! How come you need to build the image remotely? Is it to avoid having nix itself (together with the nix store) as a local dependency, or for other reasons?

mattpolzin commented

Ah, I think you've inadvertently caught me in an exaggeration. I didn't mean to do so, but I overstated this as a need when it's more of a desire.

The most honest answer to why we want building to be done remotely is "because we want a drop-in replacement for the Kaniko remote building."

That said, thinking on it for a minute, there are some small differences between host and in-cluster building that do benefit my org.

We benefit from freeing up compute resources on our laptops by letting Kaniko build images in-cluster. We also benefit from reduced bandwidth consumption on employee home networks and from speed improvements for employees with slower upload speeds. We even get a small boost from hosting our own Harbor docker image registry in the same cluster where our Garden deploys run, so request latency between the in-cluster Kaniko builders and the registry is very low.

It's tempting to point to the reason you stated: not needing Nix installed on a laptop in order to do a Garden deploy. Early in the exploration that would be a true benefit (as I start to swap Dockerfiles for Nix files while everyone else's environment stays unchanged), but it would be short-lived, since I also hope to move our dev tooling package management over to Nix to get even more bang for my buck out of the derivations that build our services.

mattpolzin commented

I haven't had more time to experiment with any of this, but I did realize there is actually one pretty good reason why we would want to build the images remotely instead of locally: avoiding the complexity of cross-compilation. Whether the dev machine is ARM and the server x86, or the local machine runs Darwin and the server Linux, cross-compilation would come up for us right away at my org.
