With DataMover restored PV is Empty #7189
Comments
Is the volume you are checking the one for this PVC?
I cannot see a mount directory on the node the pod runs on, so it does not look like it.
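(For anyone reproducing this, a generic way to find the node and check for the mount; the pod, PVC, and namespace names are placeholders, and node access is assumed:)

```bash
# Which node is the restored pod on, and which PV is bound to its PVC?
kubectl -n <namespace> get pod <pod-name> -o jsonpath='{.spec.nodeName}'
kubectl -n <namespace> get pvc <pvc-name> -o jsonpath='{.spec.volumeName}'

# On that node (via SSH or a node debug shell), look for the kubelet mount of the PV
findmnt | grep <pv-name>
```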
@hofq Please do the check after the restore and when you see the problem. Don't remove the restored pod.
The pod was not removed, but I will re-trigger the restore.
If the pod was not removed, you should be able to see
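(A generic way to do that check from inside the restored pod, assuming its image ships a shell; the names and mount path are placeholders:)

```bash
# List the contents of the restored volume mount inside the pod
kubectl -n <namespace> exec <pod-name> -- ls -la /path/to/mount
```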
This is the tree directly after the restore:
What files are in the volume? Did you create them manually or are they owned by any application?
I see you were backing up the default namespace; can you move the workload to a dedicated namespace and test the backup/restore?
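For reference, one minimal way to set up such a dedicated test namespace is sketched below; every name, the storage class, and the mount path are placeholders chosen for illustration, not taken from this issue:

```bash
kubectl create namespace dm-restore-test

kubectl -n dm-restore-test apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp3   # placeholder; use the cluster's CSI storage class
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: writer
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "date > /data/marker.txt && sleep 36000"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-data
EOF
```

After the pod writes the marker file, a backup/restore round trip can be verified by checking for /data/marker.txt in the restored namespace.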
Okay, so I will back up the namespace prod-backup-test using my schedule "prod-30d-eu-central-1", which backs up the namespace in question and a few others. After that I will restore the namespace prod-backup-test into prod-backup-test2. prod-backup-test only carries the pod with the volume and PVC. prod-backup-test2 does not exist before the restore. Here are the commands I ran:

```
velero create backup --from-schedule prod-30d-eu-central-1
velero create restore --from-backup prod-30d-eu-central-1-20231207135032 --include-namespaces prod-backup-test --namespace-mappings prod-backup-test:prod-backup-test2
```

After checking the volume, it was empty again. The schedule in question is this one:

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  annotations:
  labels:
    app: prod-backup-schedules
    app.kubernetes.io/instance: prod-backup-schedules
    type: backup
  name: prod-30d-eu-central-1
  namespace: velero
spec:
  paused: false
  schedule: 0 3 * * *
  template:
    datamover: velero
    includedNamespaces:
    - kube-system
    - monitoring
    - grafana
    - logging
    - prod-backup-test
    snapshotMoveData: true
    storageLocation: default
    ttl: 730h
    volumeSnapshotLocations:
    - csi-gp3
  useOwnerReferencesInBackup: false
```
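(For what it's worth, a sketch of how one could confirm that the data mover actually ran for a backup like this on Velero 1.12; the backup name is the one from the commands above:)

```bash
# List the data mover upload CRs created for snapshot data movement
kubectl -n velero get datauploads.velero.io

# The detailed describe output also lists the moved volume snapshots
velero backup describe prod-30d-eu-central-1-20231207135032 --details
```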
@hofq Not sure this works for you. Please find me in the velero-user Slack channel; we can have a live session to troubleshoot the problem.
@hofq
On the main tag I have the issue that the restore won't even do anything: the restore process just runs forever without anything happening in the cluster. Even the namespace was not created.
Did you see the restore CR created and being processed?
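(For anyone hitting the same thing, a couple of generic ways to check that; the restore name is a placeholder:)

```bash
# Check that the Restore CR exists and see which phase it is in
kubectl -n velero get restores.velero.io
velero restore describe <restore-name> --details
```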
It is there, but no DataDownload was initiated.
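(A sketch of where one could look next in this situation, assuming the default Velero 1.12 component names:)

```bash
# List data mover download CRs, if any were created
kubectl -n velero get datadownloads.velero.io

# Look for data mover errors in the server and node-agent logs
kubectl -n velero logs deployment/velero | grep -i datadownload
kubectl -n velero logs daemonset/node-agent | grep -i -e datadownload -e error
```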
Please help collect the Velero log bundle; we will troubleshoot further.
Probably I need to make the fix in the 1.12 branch. The CRDs have changed in 1.13 (main image), so your client doesn't match them.
Thank you
@hofq Note: if you have used
I can confirm a successful restore! Thanks for your help.
This problem is similar to #7027, which will happen in all EKS environments using AWS IAM roles for service accounts.
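(For others who land here: a rough way to check whether the Velero service account is wired up for IAM Roles for Service Accounts on EKS; the service account name assumes a default install:)

```bash
# IRSA is configured via an annotation on the service account
kubectl -n velero get serviceaccount velero -o yaml | grep eks.amazonaws.com/role-arn
```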
What steps did you take and what happened:
schedule.yaml
```
velero create restore --from-backup <name of backup> --include-namespaces <namespace name>
```
What did you expect to happen:
The following information will help us better understand what's going on:
If you are using velero v1.7.0+:
Please use `velero debug --backup <backupname> --restore <restorename>` to generate the support bundle and attach it to this issue; for more options please refer to `velero debug --help`.
bundle-2023-12-07-18-18-49.tar.gz
Anything else you would like to add:
Environment:
- Velero version (use `velero version`): v1.12.2 (also tried v1.12.1 and 1.12.2-rc2)
- Velero features (use `velero client config get features`):
- Kubernetes version (use `kubectl version`): Client: v1.28.4
- OS (e.g. from `/etc/os-release`): Amazon Linux 2

Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.