
k8s, code/hiera deployment isn't working because of ReadWriteOnce volume #204

Open
baurmatt opened this issue Feb 18, 2020 · 3 comments
Assignees: Xtigyro
Labels: helm chart, invalid (This doesn't seem right)

Comments

@baurmatt

Describe the Bug

The k8s Helm chart uses a PersistentVolumeClaim (puppet-code-claim) for the Puppet code storage (/etc/puppetlabs/code/). The claim requests its volume with the access mode ReadWriteOnce.

The PVC (puppet-code-claim) is then mounted by the Puppetserver Deployment and by both r10k CronJobs (code/hiera). Because the volume is ReadWriteOnce, it can only be attached to a single node, so this doesn't work once those pods are scheduled onto different nodes.

This effectively leads to a broken setup, because the code will never be deployed.
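
For reference, the claim looks roughly like this (only the claim name and access mode are taken from the chart as described above; the rest is illustrative):

```yaml
# Sketch of the code claim -- only the name and accessModes reflect the chart
# behaviour described above; size and other fields are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: puppet-code-claim
spec:
  accessModes:
    - ReadWriteOnce      # mountable read-write by a single node only
  resources:
    requests:
      storage: 1Gi       # illustrative size
```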

Expected Behavior

Quick and dirty: s,ReadWriteOnce,ReadWriteMany,g

But because ReadWriteMany isn't supported by many cloud providers, I would prefer a solution which doesn't depend on the cloud provider offering ReadWriteMany volumes.

My current approach would be a webhook-based version instead of cronjobs (sketched after the list below):

  • Use emptyDir volumes instead of a PVC for code/hiera
  • Give every Puppetserver pod a sidecar running a webhook endpoint which can trigger a local r10k redeployment of code/hiera
  • Create a central webhook receiver pod that GitHub/GitLab/... webhooks can be pushed to. This receiver should forward the redeploy request to all currently running Puppetserver pods.
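
A rough sketch of what the Deployment could look like under this approach (container names, images, and the webhook port are purely illustrative, not taken from the chart):

```yaml
# Illustrative only -- container names, images, and the webhook port are
# hypothetical, not part of the current chart.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: puppetserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: puppetserver
  template:
    metadata:
      labels:
        app: puppetserver
    spec:
      volumes:
        - name: puppet-code
          emptyDir: {}                 # per-pod code volume instead of the shared PVC
      containers:
        - name: puppetserver
          image: puppet/puppetserver
          volumeMounts:
            - name: puppet-code
              mountPath: /etc/puppetlabs/code
        - name: r10k-webhook           # sidecar: receives the redeploy webhook and
          image: example/r10k-webhook  # runs r10k locally (hypothetical image)
          ports:
            - containerPort: 8088
          volumeMounts:
            - name: puppet-code
              mountPath: /etc/puppetlabs/code
```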

Steps to Reproduce

Deploy the Helm chart on a multi-node cluster (not multi-AZ/DC! ;))

Environment

Additional Context

Add any other context about the problem here.

baurmatt added the bug (Something isn't working) label on Feb 18, 2020
@baurmatt (Author)

Another alternative would be to run the cronjob in the sidecar.
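
Roughly, that would mean adding a container like this to the Puppetserver Deployment instead of the CronJobs (image, entrypoint, schedule, and flags are assumptions, not the chart's actual setup):

```yaml
# Illustrative crond sidecar -- assumes a busybox/alpine-style crond is
# available in the puppet/r10k image; schedule and flags are examples only.
- name: r10k-code-deploy
  image: puppet/r10k
  command: ["sh", "-c"]
  args:
    - >-
      echo '*/15 * * * * r10k deploy environment --puppetfile' > /etc/crontabs/root
      && crond -f -l 2
  volumeMounts:
    - name: puppet-code
      mountPath: /etc/puppetlabs/code
```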

@baurmatt (Author)

Moaaar options :D https://github.com/kubernetes/git-sync
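
E.g. something along these lines as an additional sidecar (repo URL, image tag, and paths are placeholders; flags follow git-sync v3 usage):

```yaml
# Minimal git-sync sidecar sketch -- repo URL, image tag, and target path are
# placeholders; see the git-sync README for the real options.
- name: code-git-sync
  image: k8s.gcr.io/git-sync/git-sync:v3.6.2
  args:
    - --repo=https://example.com/puppet/control-repo.git
    - --branch=production
    - --root=/etc/puppetlabs/code/environments
    - --wait=60
  volumeMounts:
    - name: puppet-code
      mountPath: /etc/puppetlabs/code
```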

sistason pushed a commit to syseleven/pupperware that referenced this issue Feb 20, 2020
Fixes puppetlabs#204 by migrating the r10k cronjobs to sidecars
The sidecar is using crond, while still respecting all values.yaml options from the cronjob
Xtigyro self-assigned this on Mar 31, 2020
@Xtigyro (Collaborator)

Xtigyro commented Mar 31, 2020

@baurmatt Hey - thanks for reporting it.

Right now we don't support running multiple Puppet masters on different nodes that sit in different cloud zones, but we have plans for that and work on it has started.

In the README you can find information on how to properly set up Puppet Server in the cloud using a K8s StorageClass here, how to set pod affinity for r10k here, and how to set common node selectors for Puppet Server and r10k here.
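
For illustration, the idea is to back the code claim with a cloud StorageClass and/or co-schedule Puppet Server and r10k on the same node, e.g. via values like these (key names are illustrative, not necessarily the chart's actual values.yaml schema):

```yaml
# Illustrative values -- key names are assumptions; check the chart's
# values.yaml / README for the real option names.
storage:
  storageClass: standard            # cloud StorageClass backing the code claim
nodeSelector:
  kubernetes.io/hostname: node-1    # common node selector so Puppet Server and
                                    # the r10k jobs land on the same node
```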

Have you tested those options?

Xtigyro added the helm chart and invalid (This doesn't seem right) labels and removed the bug (Something isn't working) label on Apr 4, 2020