Proxy for accessories #981

Open
wants to merge 8 commits into main

Conversation

@igor-alexandrov (Contributor) commented Sep 26, 2024

This PR adds functionality for accessories similar to what Traefik's Host middleware provided.
Some accessories, like PgHero, need to be deployed behind kamal-proxy to expose a web interface.

This PR also includes changes from #988.
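
For illustration, a hypothetical sketch of the kind of configuration this PR aims to enable; the keys shown are illustrative, not the final schema:

accessories:
  pghero:
    image: ankane/pghero:latest
    host: 1.2.3.4
    proxy:
      host: pghero.example.com      # routed through kamal-proxy, like Traefik's Host rule
      app_port: 8080                # container port kamal-proxy targets
      healthcheck:
        path: /                     # path kamal-proxy polls before routing traffic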

@igor-alexandrov changed the title from Proxy accessories to Proxy for accessories on Sep 28, 2024
@igor-alexandrov marked this pull request as ready for review on Sep 28, 2024
@djmb (Collaborator) commented Sep 30, 2024

I don't think we want to start supporting proxied accessories in Kamal. If there are two things that require proper Kamal deployments, we could have two separate sets of deploy files.
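
For what it's worth, a minimal sketch of that two-deploy-file approach, assuming Kamal's -c/--config-file option and a hypothetical second config file for the dashboards:

# main application
kamal deploy -c config/deploy.yml

# separate deployment for web-facing tools like PgHero or Grafana
kamal deploy -c config/deploy.dashboards.yml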

@igor-alexandrov (Contributor, Author)

I didn't think about having a separate deploy file. I will check and come back to you.

@luizkowalski

Just to add to the convo: I'm running a couple of accessories on the web role, like Grafana and Umami. This would be a great addition! Right now it works smoothly with Traefik; I just add some labels and move on.

@igor-alexandrov (Contributor, Author)

@luizkowalski I also run Grafana and PgHero. As a workaround for now, you can deploy manually after the accessory has started:

docker exec 3a34b6c08923 kamal-proxy deploy onetribe-pghero --target c67f2259dce6:8080 --host pghero.onetribe.team --health-check-path /

@luizkowalski

Which container is 3a34b6c08923, exactly? I assume c67f2259dce6 would be PgHero.

@igor-alexandrov (Contributor, Author)

Sorry, that wasn't clear:

  1. 3a34b6c08923 is the kamal-proxy container
  2. c67f2259dce6 is the PgHero container
  3. The command should be run from the host machine (see the sketch below for finding these IDs)
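
If it helps, both IDs can be looked up from the host; this assumes the default kamal-proxy container name and an accessory container whose name contains "pghero" (docker ps name filters match substrings):

# ID of the kamal-proxy container
docker ps --filter "name=kamal-proxy" --format "{{.ID}}"

# ID of the PgHero accessory container
docker ps --filter "name=pghero" --format "{{.ID}}"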

@luizkowalski

I can't do it for some reason:

kamal@web:~$ docker exec bcb17df9ad18 kamal-proxy deploy sumiu-umami --target sumiu-umami:3000 --host msg.sumiu.link --health-check-path /
Error: target failed to become healthy

and the logs are just

2024-09-30T20:41:18.620448260Z {"time":"2024-09-30T20:41:18.619622857Z","level":"INFO","msg":"Healthcheck failed","status":307}
2024-09-30T20:41:18.620726703Z {"time":"2024-09-30T20:41:18.619706895Z","level":"INFO","msg":"Target health updated","target":"sumiu-umami:3000","success":false,"state":"adding"}
2024-09-30T20:41:19.621417953Z {"time":"2024-09-30T20:41:19.621048761Z","level":"INFO","msg":"Healthcheck failed","status":307}
2024-09-30T20:41:19.621483430Z {"time":"2024-09-30T20:41:19.62108796Z","level":"INFO","msg":"Target health updated","target":"sumiu-umami:3000","success":false,"state":"adding"}
2024-09-30T20:41:20.626210209Z {"time":"2024-09-30T20:41:20.625598445Z","level":"INFO","msg":"Healthcheck failed","status":307}
2024-09-30T20:41:20.626277910Z {"time":"2024-09-30T20:41:20.625647628Z","level":"INFO","msg":"Target health updated","target":"sumiu-umami:3000","success":false,"state":"adding"}
2024-09-30T20:41:21.617063999Z {"time":"2024-09-30T20:41:21.616750402Z","level":"INFO","msg":"Healthcheck failed","status":307}
2024-09-30T20:41:21.617101013Z {"time":"2024-09-30T20:41:21.616789156Z","level":"INFO","msg":"Target health updated","target":"sumiu-umami:3000","success":false,"state":"adding"}
2024-09-30T20:41:22.620533515Z {"time":"2024-09-30T20:41:22.620073727Z","level":"INFO","msg":"Healthcheck failed","status":307}

I can curl "sumiu-umami" from any other container, though.

@igor-alexandrov (Contributor, Author)

Maybe you are using the wrong health check path?

@luizkowalski

Yup, that was it. I was pointing to /, but it actually does not respond with 200.
Looking forward to this PR getting merged.
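
A quick way to verify a health check path before deploying is to request it from a throwaway container on the proxy's network; this sketch assumes the default kamal network name and uses the curlimages/curl image for convenience:

# prints the HTTP status kamal-proxy would see; anything other than 200 fails the check
docker run --rm --network kamal curlimages/curl:latest \
  -s -o /dev/null -w "%{http_code}\n" http://sumiu-umami:3000/

A 307 like the one in the logs above would show up immediately here.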

@djmb (Collaborator) commented Oct 1, 2024

@igor-alexandrov, @luizkowalski - what were you using Traefik for with those accessories, compared to just publishing them directly?

You can run Traefik as an accessory as well. You can either run it in front of everything if you want it on ports 80/443, or you could stand it up on different ports and configure the accessories to use it (see https://kamal-deploy.org/docs/upgrading/continuing-to-use-traefik/ for more details).
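
For reference, a minimal sketch of Traefik running as a Kamal accessory on non-standard ports; the image tag, host, and port numbers are assumptions for illustration:

accessories:
  traefik:
    image: traefik:v2.11
    hosts:
      - web
    port: "8080:80"          # kamal-proxy keeps 80/443; Traefik listens elsewhere
    cmd: --providers.docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Traefik watch container labels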

@igor-alexandrov (Contributor, Author)

@djmb Donal, having PgHero and probably other monitoring dashboards as accessories is easier than deploying them as a separate application (though I think that is possible too).

So the question is: why is the database an accessory, while a monitoring tool for that database has to be a separate application?

@luizkowalski commented Oct 1, 2024

> what were you using Traefik for with those accessories, compared to just publishing them directly?

I deployed them to the web role and put them on a subdomain:

umami:
  image: umamisoftware/umami:postgresql-v2.13.2
  hosts:
    - web
  env:
    clear:
      DATABASE_URL: <%= ENV["UMAMI_DATABASE_URL"] %>
      APP_SECRET: <%= ENV["UMAMI_SECRET_KEY_BASE"] %>
      DATABASE_TYPE: postgresql
  labels:
    traefik.http.routers.umami.rule: Host(`msg.domain.com`)
    traefik.http.routers.umami.service: sumiu-umami@docker
    traefik.http.routers.umami.tls: true

it is incredibly helpful

> You can run Traefik as an accessory as well

I know, but not only does it feel like a hack, I'd also need to add one more accessory and maintain it. It would be much easier if this behavior were native.

@luizkowalski

Ah, one more thing I remembered: the "args" option is not available for accessories, so all the configuration I'm currently doing for Traefik would have to be clumped under "cmd", and I found that difficult to do (see the sketch below).
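
To illustrate the pain point, here is roughly what that clumping looks like; the flags are typical Traefik options chosen for illustration:

accessories:
  traefik:
    image: traefik:v2.11
    # with no args: key for accessories, every flag lands in one cmd string
    cmd: >-
      --providers.docker
      --entrypoints.websecure.address=:443
      --certificatesresolvers.letsencrypt.acme.tlschallenge=true
      --certificatesresolvers.letsencrypt.acme.email=admin@example.com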

@luizkowalski

I've been giving this a try with Traefik, but I can't tell if it's something on my end or whether the changes in v2 are just not compatible with the previous approach.

First, I set proxy: false and configured the pre-deploy hook following the instructions from the upgrade guide.

for my web app, I have the following configuration:

servers:
  web:
    hosts:
      - web
    labels:
      traefik.enable: true
      traefik.http.routers.sumiu.rule: Host(`domain.com`)
      traefik.http.routers.sumiu.entrypoints: websecure
      traefik.http.routers.sumiu.tls.certresolver: letsencrypt

      traefik.http.routers.sumiu-metrics.rule: Host(`domain.com`) && PathPrefix(`/metrics`)
      traefik.http.routers.sumiu-metrics.service: sumiu-web@docker
      traefik.http.routers.sumiu-metrics.tls: true
      traefik.http.routers.sumiu-metrics.middlewares: metrics-auth
      traefik.http.middlewares.metrics-auth.basicauth.users: admin:$apr1$5QeDhE9N$wfLTmPYcFmoHiFbmv.RC1/

Previously, the service name used to be stable across deployments, so sumiu-web would definitely work. Now it doesn't, because the service name is something like sumiu-web-58692e90c3e42f6a41490b555092ad2c804dcac1-uncommitted-eb44aaa2018ef063@docker, so it is impossible to predict the service name to reference under sumiu-metrics here. This is not the case for other services:

[screenshot]
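
One possible workaround, assuming standard Traefik docker-provider behavior: a service defined explicitly through labels keeps the declared name regardless of the container name, so the router can reference it stably:

labels:
  # hypothetical explicit service definition; its name no longer derives from the container
  traefik.http.services.sumiu-web.loadbalancer.server.port: 80
  traefik.http.routers.sumiu-metrics.service: sumiu-web@docker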

I also noticed one more problem: downtime between deploys. (I can't tell why this is happening; maybe it has to do with the proxy being disabled, so there are no health checks?)

This is the output of the deployment during a downtime:
  INFO [77859a44] Running docker login -u [REDACTED] -p [REDACTED] on web
  INFO [77859a44] Finished in 1.010 seconds with exit status 0 (successful).
  INFO [fcd6be6a] Running docker inspect kamal-proxy --format '{{.Config.Image}}' | cut -d: -f2 on web
  INFO [fcd6be6a] Finished in 0.167 seconds with exit status 0 (successful).
  INFO [90e2e19e] Running docker container start kamal-proxy || docker run --name kamal-proxy --network kamal --detach --restart unless-stopped --volume kamal-proxy-config:/home/kamal-proxy/.config/kamal-proxy $(cat .kamal/proxy/options || echo "--publish 80:80 --publish 443:443") basecamp/kamal-proxy:v0.6.0 on web
  INFO [90e2e19e] Finished in 0.159 seconds with exit status 0 (successful).
Detect stale containers...
  INFO [d3a0bf00] Running docker ps --filter label=service=sumiu --filter label=role=web --format "{{.Names}}" | while read line; do echo ${line#sumiu-web-}; done on web
  INFO [d3a0bf00] Finished in 0.154 seconds with exit status 0 (successful).
  INFO [b8dba2ed] Running /usr/bin/env sh -c 'docker ps --latest --format '\''{{.Names}}'\'' --filter label=service=sumiu --filter label=role=web --filter status=running --filter status=restarting --filter ancestor=$(docker image ls --filter reference=luizkowalski/sumiu-app:latest --format '\''{{.ID}}'\'') ; docker ps --latest --format '\''{{.Names}}'\'' --filter label=service=sumiu --filter label=role=web --filter status=running --filter status=restarting' | head -1 | while read line; do echo ${line#sumiu-web-}; done on web
  INFO [b8dba2ed] Finished in 0.232 seconds with exit status 0 (successful).
  INFO [c89cfa32] Running docker ps --filter label=service=sumiu --filter label=role=job --format "{{.Names}}" | while read line; do echo ${line#sumiu-job-}; done on web
  INFO [c89cfa32] Finished in 0.166 seconds with exit status 0 (successful).
  INFO [ff0a00e4] Running /usr/bin/env sh -c 'docker ps --latest --format '\''{{.Names}}'\'' --filter label=service=sumiu --filter label=role=job --filter status=running --filter status=restarting --filter ancestor=$(docker image ls --filter reference=luizkowalski/sumiu-app:latest --format '\''{{.ID}}'\'') ; docker ps --latest --format '\''{{.Names}}'\'' --filter label=service=sumiu --filter label=role=job --filter status=running --filter status=restarting' | head -1 | while read line; do echo ${line#sumiu-job-}; done on web
  INFO [ff0a00e4] Finished in 0.260 seconds with exit status 0 (successful).
Start container with version 47180c1aab1d0dd56f986f90126523bca18468a3 (or reboot if already running)...
  INFO [0c456bf6] Running /usr/bin/env mkdir -p .kamal/apps/sumiu/assets/extracted/web-47180c1aab1d0dd56f986f90126523bca18468a3 && docker stop -t 1 sumiu-web-assets 2> /dev/null || true && docker run --name sumiu-web-assets --detach --rm --entrypoint sleep luizkowalski/sumiu-app:47180c1aab1d0dd56f986f90126523bca18468a3 1000000 && docker cp -L sumiu-web-assets:/rails/public/assets/. .kamal/apps/sumiu/assets/extracted/web-47180c1aab1d0dd56f986f90126523bca18468a3 && docker stop -t 1 sumiu-web-assets on web
  INFO [0c456bf6] Finished in 12.956 seconds with exit status 0 (successful).
  INFO [8dbf5732] Running /usr/bin/env sh -c 'docker ps --latest --format '\''{{.Names}}'\'' --filter label=service=sumiu --filter label=role=web --filter status=running --filter status=restarting --filter ancestor=$(docker image ls --filter reference=luizkowalski/sumiu-app:latest --format '\''{{.ID}}'\'') ; docker ps --latest --format '\''{{.Names}}'\'' --filter label=service=sumiu --filter label=role=web --filter status=running --filter status=restarting' | head -1 | while read line; do echo ${line#sumiu-web-}; done on web
  INFO [8dbf5732] Finished in 0.185 seconds with exit status 0 (successful).
  INFO [569f2192] Running /usr/bin/env mkdir -p .kamal/apps/sumiu/assets/volumes/web-47180c1aab1d0dd56f986f90126523bca18468a3 ; cp -rnT .kamal/apps/sumiu/assets/extracted/web-47180c1aab1d0dd56f986f90126523bca18468a3 .kamal/apps/sumiu/assets/volumes/web-47180c1aab1d0dd56f986f90126523bca18468a3 ; cp -rnT .kamal/apps/sumiu/assets/extracted/web-47180c1aab1d0dd56f986f90126523bca18468a3 .kamal/apps/sumiu/assets/volumes/web-58692e90c3e42f6a41490b555092ad2c804dcac1_uncommitted_eb44aaa2018ef063 || true ; cp -rnT .kamal/apps/sumiu/assets/extracted/web-58692e90c3e42f6a41490b555092ad2c804dcac1_uncommitted_eb44aaa2018ef063 .kamal/apps/sumiu/assets/volumes/web-47180c1aab1d0dd56f986f90126523bca18468a3 || true on web
  INFO [569f2192] Finished in 0.143 seconds with exit status 0 (successful).
  INFO [7e05dc95] Running docker container ls --all --filter name=^sumiu-web-47180c1aab1d0dd56f986f90126523bca18468a3$ --quiet on web
  INFO [7e05dc95] Finished in 0.157 seconds with exit status 0 (successful).
  INFO [ffa6a90f] Running /usr/bin/env sh -c 'docker ps --latest --format '\''{{.Names}}'\'' --filter label=service=sumiu --filter label=role=web --filter status=running --filter status=restarting --filter ancestor=$(docker image ls --filter reference=luizkowalski/sumiu-app:latest --format '\''{{.ID}}'\'') ; docker ps --latest --format '\''{{.Names}}'\'' --filter label=service=sumiu --filter label=role=web --filter status=running --filter status=restarting' | head -1 | while read line; do echo ${line#sumiu-web-}; done on web
  INFO [ffa6a90f] Finished in 0.232 seconds with exit status 0 (successful).
  INFO [dc79208e] Running /usr/bin/env mkdir -p .kamal/apps/sumiu/env/roles on web
  INFO [dc79208e] Finished in 0.129 seconds with exit status 0 (successful).
  INFO Uploading .kamal/apps/sumiu/env/roles/web.env 100.0%
  INFO [2278229e] Running docker run --detach --restart unless-stopped --name sumiu-web-47180c1aab1d0dd56f986f90126523bca18468a3 --network kamal --hostname web-9145ff71092c -e KAMAL_CONTAINER_NAME="sumiu-web-47180c1aab1d0dd56f986f90126523bca18468a3" -e KAMAL_VERSION="47180c1aab1d0dd56f986f90126523bca18468a3" --env RAILS_MAX_THREADS="3" --env WEB_CONCURRENCY="2" --env RAILS_ENV="production" --env-file .kamal/apps/sumiu/env/roles/web.env --log-opt max-size="10m" --volume /data/thruster:/rails/storage/thruster --volume $(pwd)/.kamal/apps/sumiu/assets/volumes/web-47180c1aab1d0dd56f986f90126523bca18468a3:/rails/public/assets --label service="sumiu" --label role="web" --label destination --label traefik.enable="true" --label traefik.http.routers.sumiu.rule="Host(\`sumiu.link\`)" --label traefik.http.routers.sumiu.entrypoints="websecure" --label traefik.http.routers.sumiu.tls.certresolver="letsencrypt" --label traefik.http.routers.sumiu-metrics.rule="Host(\`sumiu.link\`) && PathPrefix(\`/metrics\`)" --label traefik.http.routers.sumiu-metrics.service="sumiu-web@docker" --label traefik.http.routers.sumiu-metrics.tls="true" --label traefik.http.routers.sumiu-metrics.middlewares="metrics-auth" --label traefik.http.middlewares.metrics-auth.basicauth.users="admin:\$apr1\$5QeDhE9N\$wfLTmPYcFmoHiFbmv.RC1/" luizkowalski/sumiu-app:47180c1aab1d0dd56f986f90126523bca18468a3 on web
  INFO [2278229e] Finished in 0.562 seconds with exit status 0 (successful).
  INFO [9c474a4b] Running docker container ls --all --filter name=^sumiu-web-47180c1aab1d0dd56f986f90126523bca18468a3$ --quiet on web
  INFO [9c474a4b] Finished in 0.176 seconds with exit status 0 (successful).
  INFO [2be89293] Running docker exec kamal-proxy kamal-proxy deploy sumiu-web --target "d1b769050226:80" --deploy-timeout "30s" --drain-timeout "30s" --buffer-requests --buffer-responses --log-request-header "Cache-Control" --log-request-header "Last-Modified" --log-request-header "User-Agent" on web
  INFO [2be89293] Finished in 11.345 seconds with exit status 0 (successful).
  INFO First web container is healthy on web, booting any other roles
  INFO [2a0ac0ca] Running docker container ls --all --filter name=^sumiu-web-58692e90c3e42f6a41490b555092ad2c804dcac1_uncommitted_eb44aaa2018ef063$ --quiet | xargs docker stop on web
  INFO [2a0ac0ca] Finished in 12.462 seconds with exit status 0 (successful).
  INFO [98639c67] Running /usr/bin/env find .kamal/apps/sumiu/assets/extracted -maxdepth 1 -name 'web-*' ! -name web-47180c1aab1d0dd56f986f90126523bca18468a3 -exec rm -rf "{}" + ; find .kamal/apps/sumiu/assets/volumes -maxdepth 1 -name 'web-*' ! -name web-47180c1aab1d0dd56f986f90126523bca18468a3 -exec rm -rf "{}" + on web
  INFO [98639c67] Finished in 0.092 seconds with exit status 0 (successful).
  INFO [523f1780] Running docker container ls --all --filter name=^sumiu-job-47180c1aab1d0dd56f986f90126523bca18468a3$ --quiet on web
  INFO [523f1780] Finished in 0.171 seconds with exit status 0 (successful).
  INFO [606f64c3] Running /usr/bin/env sh -c 'docker ps --latest --format '\''{{.Names}}'\'' --filter label=service=sumiu --filter label=role=job --filter status=running --filter status=restarting --filter ancestor=$(docker image ls --filter reference=luizkowalski/sumiu-app:latest --format '\''{{.ID}}'\'') ; docker ps --latest --format '\''{{.Names}}'\'' --filter label=service=sumiu --filter label=role=job --filter status=running --filter status=restarting' | head -1 | while read line; do echo ${line#sumiu-job-}; done on web
  INFO [606f64c3] Finished in 0.265 seconds with exit status 0 (successful).
  INFO Waiting for the first healthy web container before booting job on web...
  INFO First web container is healthy, booting job on web...
  INFO [720688f2] Running /usr/bin/env mkdir -p .kamal/apps/sumiu/env/roles on web
  INFO [720688f2] Finished in 0.124 seconds with exit status 0 (successful).
  INFO Uploading .kamal/apps/sumiu/env/roles/job.env 100.0%
  INFO [d8706509] Running docker run --detach --restart unless-stopped --name sumiu-job-47180c1aab1d0dd56f986f90126523bca18468a3 --network kamal --hostname web-661c3a4e6625 -e KAMAL_CONTAINER_NAME="sumiu-job-47180c1aab1d0dd56f986f90126523bca18468a3" -e KAMAL_VERSION="47180c1aab1d0dd56f986f90126523bca18468a3" --env RAILS_MAX_THREADS="3" --env WEB_CONCURRENCY="2" --env RAILS_ENV="production" --env CRON_ENABLED="1" --env GOOD_JOB_MAX_THREADS="10" --env DB_POOL="13" --env-file .kamal/apps/sumiu/env/roles/job.env --log-opt max-size="10m" --volume /data/thruster:/rails/storage/thruster --label service="sumiu" --label role="job" --label destination --label traefik.enable luizkowalski/sumiu-app:47180c1aab1d0dd56f986f90126523bca18468a3 bin/good_job --probe-port=7001 on web
  INFO [d8706509] Finished in 0.579 seconds with exit status 0 (successful).
  INFO [76bdac33] Running docker container ls --all --filter name=^sumiu-job-47180c1aab1d0dd56f986f90126523bca18468a3$ --quiet | xargs docker inspect --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}{{.State.Status}}{{end}}' on web
  INFO [76bdac33] Finished in 0.194 seconds with exit status 0 (successful).
  INFO Container is running, waiting for readiness delay of 7 seconds
  INFO [69a72a88] Running docker container ls --all --filter name=^sumiu-job-47180c1aab1d0dd56f986f90126523bca18468a3$ --quiet | xargs docker inspect --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}{{.State.Status}}{{end}}' on web
  INFO [69a72a88] Finished in 0.162 seconds with exit status 0 (successful).
  INFO Container is healthy!
  INFO [81bf21bf] Running docker container ls --all --filter name=^sumiu-job-58692e90c3e42f6a41490b555092ad2c804dcac1_uncommitted_eb44aaa2018ef063$ --quiet | xargs docker stop -t 30 on web
  INFO [81bf21bf] Finished in 12.437 seconds with exit status 0 (successful).
  INFO [6ccf3e48] Running docker tag luizkowalski/sumiu-app:47180c1aab1d0dd56f986f90126523bca18468a3 luizkowalski/sumiu-app:latest on web
  INFO [6ccf3e48] Finished in 0.159 seconds with exit status 0 (successful).
Prune old containers and images...
  INFO [f05dedfb] Running docker ps -q -a --filter label=service=sumiu --filter status=created --filter status=exited --filter status=dead | tail -n +6 | while read container_id; do docker rm $container_id; done on web
  INFO [f05dedfb] Finished in 0.192 seconds with exit status 0 (successful).
  INFO [2daf8d83] Running docker image prune --force --filter label=service=sumiu on web
  INFO [2daf8d83] Finished in 0.168 seconds with exit status 0 (successful).
  INFO [5a491889] Running docker image ls --filter label=service=sumiu --format '{{.ID}} {{.Repository}}:{{.Tag}}' | grep -v -w "$(docker container ls -a --format '{{.Image}}\|' --filter label=service=sumiu | tr -d '\n')luizkowalski/sumiu-app:latest\|luizkowalski/sumiu-app:<none>" | while read image tag; do docker rmi $tag; done on web
  INFO [5a491889] Finished in 0.174 seconds with exit status 0 (successful).
Releasing the deploy lock...
  Finished all in 420.0 seconds
Running the post-deploy hook...
  INFO [56748980] Running /usr/bin/env .kamal/hooks/post-deploy as luiz@localhost
  INFO [56748980] Finished in 1.004 seconds with exit status 0 (successful).

so it kinda works
