
[kube-prometheus-stack] Increasing the storage in values.yaml does not increase PVC #3746

Open
chichi13 opened this issue Sep 1, 2023 · 2 comments
Labels: bug (Something isn't working)

Comments

chichi13 commented Sep 1, 2023

Describe the bug

I have a GKE cluster (v1.26.5) and wanted to increase the Prometheus storage from 120Gi to 140Gi, so I updated the values.yaml:

kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      storageSpec:
        volumeClaimTemplate:
          spec:
            storageClassName: balanced-rwo-retain
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 140Gi # Changed

Here is the output of the helm diff after this change:

❯ helm diff upgrade --install monitoring . -n monitoring -f values-ng-monitoring.yaml
monitoring, k8s, Prometheus (monitoring.coreos.com) has changed:
[...]
    storage:
      volumeClaimTemplate:
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
-             storage: 120Gi
+             storage: 140Gi
          storageClassName: balanced-rwo-retain
[...]

After the helm upgrade, here is the output of kubectl get pvc:

❯ kgpvc
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
prometheus-k8s-db-prometheus-k8s-0   Bound    pvc-e08ea343-4285-4d8a-9529-85c7ddf909cc   120Gi      RWO            balanced-rwo-retain   126d
prometheus-k8s-db-prometheus-k8s-1   Bound    pvc-0bbb0766-fb2e-48e6-9663-10d532cd256f   120Gi      RWO            balanced-rwo-retain   126d
prometheus-k8s-db-prometheus-k8s-2   Bound    pvc-456db3d2-d4bd-4ce1-9127-21ba9ef01a10   120Gi      RWO            balanced-rwo-retain   126d

The PVCs are still 120Gi, and if I exec into one of the pods:

❯ k exec -it prometheus-k8s-2 -- sh
/prometheus $ df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  54.9G      4.7G     50.2G   9% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     3.9G         0      3.9G   0% /sys/fs/cgroup
/dev/sdb                117.9G     81.3G     36.7G  69% /prometheus
[...]

The size of the disk is still 120Gi, even though the pods did restart after the upgrade...

Did I miss something, or is this unexpected behaviour?
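
For anyone debugging the same symptom, one quick check (not part of the original report) is whether the new request ever reached the claim at all, as opposed to the expansion failing at the storage layer: compare the size requested in the PVC spec with the provisioned capacity in its status, e.g. for the first replica:

❯ kubectl -n monitoring get pvc prometheus-k8s-db-prometheus-k8s-0 -o jsonpath='requested: {.spec.resources.requests.storage}{"\n"}provisioned: {.status.capacity.storage}{"\n"}'

If the requested value still reads 120Gi, nothing ever patched the claim (which is what the answer below confirms).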

What's your helm version?

v3.12.1

What's your kubectl version?

v1.27.3

Which chart?

kube-prometheus-stack

What's the chart version?

48.3.1

What happened?

No response

What you expected to happen?

No response

How to reproduce it?

No response

Enter the changed values of values.yaml?

No response

Enter the command that you execute that is failing/misfunctioning.

helm upgrade --install monitoring . -n monitoring -f values-ng-monitoring.yaml is not increasing the Prometheus disk size.

Anything else we need to know?

No response

chichi13 added the bug label on Sep 1, 2023
zeritti (Member) commented Sep 1, 2023

The Prometheus operator does not patch the corresponding PVC object, which results in the behaviour you observe, i.e. no change in capacity (in fact, neither the operator nor Helm actually creates the PVC). The idea is that this should be handled by the StatefulSet controller (see kubernetes/enhancements#661).
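
As a side note (not part of the original comment), the manual PVC patch mentioned below can only succeed if the StorageClass permits volume expansion; a quick check against the class used in the report:

❯ kubectl get storageclass balanced-rwo-retain -o jsonpath='{.allowVolumeExpansion}{"\n"}'

This should print true; if it does not, the API server will reject any attempt to grow the claims.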

At the moment, the PVC itself has to be patched manually, as described in the Prometheus operator's resizing volumes guide.
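
For reference, a minimal sketch of that manual procedure, assuming the StorageClass allows expansion and reusing the PVC and StatefulSet names from the output above (adjust the size and names to your setup):

# Grow each Prometheus PVC to the new size requested in values.yaml
for pvc in prometheus-k8s-db-prometheus-k8s-0 \
           prometheus-k8s-db-prometheus-k8s-1 \
           prometheus-k8s-db-prometheus-k8s-2; do
  kubectl -n monitoring patch pvc "$pvc" --type merge \
    -p '{"spec":{"resources":{"requests":{"storage":"140Gi"}}}}'
done

# Recreate the StatefulSet without deleting the pods, so the operator
# regenerates it with the 140Gi volumeClaimTemplate from the Prometheus CR
kubectl -n monitoring delete statefulset prometheus-k8s --cascade=orphan

Once the patch is accepted, the CSI driver should expand the volume and filesystem online; if the driver does not support online expansion, the pods may need a restart for the resize to complete.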

bchkhaidze commented

Here is an updated link to the storage resizing guide: resizing volumes
