Velero pod gets evicted because the node is running low on ephemeral-storage #7718
Comments
I have opened another issue for the Kopia path: #7725. However, we probably would not add this enhancement for the Restic path, since it will be deprecated soon.
Can you share with me what will be replacing it? I don't mind waiting a bit for a solution, but if there are no plans to resolve this bug in the long term, our company might have to look at another solution.
The Restic path for fs-backup will be deprecated/removed; only the Kopia path will be kept. So to wait for a solution, you need to switch to the Kopia path.
Thanks for the insight. In that case I guess we'll have to start moving to the Kopia path.
This issue is stale because it has been open 60 days with no activity. Remove the stale label or comment, or this will be closed in 14 days. If a Velero team member has requested logs or more information, please provide the output of the shared commands.
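As a rough sketch of what switching looks like: on a CLI-based install the uploader is selected at install time with `--uploader-type` (the helm chart would need its equivalent setting, if your chart version exposes one). Note that switching the uploader does not convert repository data created by the Restic path; the provider and bucket flags below are placeholders for whatever your installation already uses.

```bash
# Sketch, assuming a (re)install with the Velero CLI; provider/bucket values
# are placeholders for your existing configuration.
velero install \
  --provider <your-provider> \
  --bucket <your-bucket> \
  --use-node-agent \
  --uploader-type=kopia
```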
This issue was closed because it has been stalled for 14 days with no activity. |
What steps did you take and what happened:
I'm facing a problem where the Velero pod gets evicted because the node it was running on ran low on ephemeral-storage.
When I describe the pod:
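(The eviction reason appears when describing the pod, roughly along these lines; this assumes the default `velero` namespace, and the pod name is a placeholder.)

```bash
# Assuming the default "velero" namespace; the pod name is a placeholder.
kubectl -n velero describe pod <velero-pod-name>
```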
What did you expect to happen:
I would expect that the pod keeps running.
Anything else you would like to add:
I'm using version 6.0.0 of the Velero helm-chart.
I've done some investigation into this issue myself and found that the Restic cache that is stored in the scratch directory is causing the problem.
When I describe my backuprepository, I see the following error message:
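(That describe looks something like the following; again the default `velero` namespace is assumed and the resource name is a placeholder.)

```bash
# Assuming the default "velero" namespace; the BackupRepository name is a placeholder.
kubectl -n velero describe backuprepository <repository-name>
```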
We run Velero for a lot of our customers, but we only see this behavior in two of our customer environments. It's also worth mentioning that the backups still succeed, but it's still undesired behavior.
I've also tried running the `restic prune` command manually and noticed a very rapid increase in cache size, resulting in the pod getting evicted again. Before starting the prune command the cache size was 132K; the pod got evicted when the cache was between 20G and 25G.
I would have liked to use a PVC instead of an emptyDir for the scratch volume, as mentioned in #2087, but the helm chart doesn't allow for that. I also tried pruning without using the cache, but, as stated when running the command, it is very slow.
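For what it's worth, a hypothetical workaround outside the chart's values would be to patch the Velero deployment so the scratch volume is backed by a PVC rather than an emptyDir. Everything below (the `velero` namespace, the volume being named `scratch`, the PVC name `velero-scratch`) is an assumption to verify against your own deployment, and the PVC has to be created separately.

```bash
# Hypothetical workaround, not exposed by the helm chart's values: back the
# "scratch" volume with a pre-created PVC instead of an emptyDir. The namespace,
# deployment name, and PVC name are assumptions.
kubectl -n velero patch deployment velero --type strategic --patch '
spec:
  template:
    spec:
      volumes:
        - name: scratch
          emptyDir: null
          persistentVolumeClaim:
            claimName: velero-scratch
'
```

A subsequent `helm upgrade` would revert such a patch, so it is at best a stopgap rather than a fix.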
When going through the GitHub issues for this project, I've found a few related issues:
#7177
#2087
Environment:
Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" at the top right of this comment to vote.