Velero integration in the test environment #903
Conversation
LGTM, just needs a few minor changes to the Ramen config map.
VeleroNamespaceSecretKeyRef:
  key: cloud
  name: cloud-credentials
veleroNamespaceName: velero
This should be under `kubeObjectProtection:`, or it could be omitted, since `velero` is the default value.
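The suggested nesting can be sketched as follows; this is a hedged sketch of where the reviewer suggests the fields belong in the Ramen config map, and the exact surrounding keys are assumptions, not the map's verified schema:

```yaml
# Sketch (assumed shape): nesting the velero fields under
# kubeObjectProtection, as the reviewer suggests.
kubeObjectProtection:
  veleroNamespaceName: velero      # optional; "velero" is the default
  veleroNamespaceSecretKeyRef:
    name: cloud-credentials
    key: cloud
```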
Thanks. The changes were copied from:

veleroNamespaceName: "velero"
@tjanssen3 I wonder if the config map referenced above is used by recipe_e2e? It seems a few of the field identifiers are wrong. Since this is not a CRD, the usual API server checking is missing. Maybe the fields are simply ignored, which would be fine for the Velero namespace name, which defaults to velero, but the secret name and key do not have defaults.
For testing kube object protection in a setup without a hub.

Work in progress:
- Need to update `ramenctl deploy` to skip the missing hub
- Need to update `ramenctl config` to configure the dr clusters instead of the hub

Based temporarily on RamenDR#903
Part-of: RamenDR#902
Signed-off-by: Nir Soffer <nsoffer@redhat.com>

For testing kube object protection in a setup without a hub.

Based temporarily on RamenDR#903
Part-of: RamenDR#902
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
LGTM
Changes since @hatfieldbrian review:
Rebased on main to fix conflicts with the merged submariner PR.
Promote the ramenctl.minio helper module to drenv.minio so we can use it in the velero addon.

This module is not really generic and depends on the specific way we deploy the minio addon, but we don't have a better place for it now.

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
To install and debug *Velero* we need the `velero` tool. Add it to the required tools list like all the other tools.

We can simplify this later by installing the tools automatically.

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
For testing velero it is much faster to use a smaller and simpler environment.

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Use the velero command (a new requirement) to install velero on the managed clusters. Installing velero also installs the velero CRDs, so the previous code to install them was removed.

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
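The install step above can be illustrated by sketching how an addon might assemble the `velero install` command line. This is a hedged sketch, not the addon's actual code: the flag set follows velero's documented MinIO example, and the bucket name, s3 URL, and secret path below are hypothetical placeholders.

```python
# Sketch: building a `velero install` argv for a MinIO-backed s3 bucket.
# Flag names follow velero's documented MinIO example; the concrete
# bucket, URL, and credentials path are hypothetical placeholders.

def velero_install_command(bucket, s3_url, secret_file):
    """Return the argv list for installing velero against a MinIO bucket."""
    return [
        "velero", "install",
        "--provider", "aws",
        "--plugins", "velero/velero-plugin-for-aws",
        "--bucket", bucket,
        "--secret-file", secret_file,
        "--use-volume-snapshots=false",
        "--backup-location-config",
        f"region=minio,s3ForcePathStyle=true,s3Url={s3_url}",
    ]

cmd = velero_install_command(
    "velero", "http://192.168.122.10:9000", "/tmp/velero-credentials")
print(" ".join(cmd))
```

In real code this list would be passed to something like `subprocess.run(cmd, check=True)` against each managed cluster's kubeconfig.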
The test deploys nginx, creates a backup, deletes the namespace, and restores nginx from the backup. Finally it cleans up so we don't leave anything on the cluster.

The test is idempotent - if the test fails and leaves junk around, the next run will delete backups and restores and start from scratch.

Issues:
- Restore completes successfully, but has a warning. The deployment looks correct and the warning looks like expected behavior when backing up all resources blindly.

    $ velero restore get
    NAME                          BACKUP         STATUS      STARTED                         COMPLETED                       ERRORS   WARNINGS   CREATED                         SELECTOR
    nginx-backup-20230602013503   nginx-backup   Completed   2023-06-02 01:35:03 +0300 IDT   2023-06-02 01:35:04 +0300 IDT   0        1          2023-06-02 01:35:03 +0300 IDT   <none>

    $ velero restore describe | grep Warn
    Warnings:
      nginx-example: could not restore, Endpoints "my-nginx" already exists. Warning: the in-cluster version is different than the backed-up version.

    $ kubectl get all -n nginx-example
    NAME                                    READY   STATUS    RESTARTS   AGE
    pod/nginx-deployment-7c89967545-zgb7h   1/1     Running   0          41s
    pod/nginx-deployment-7c89967545-zrf2s   1/1     Running   0          41s

    NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    service/my-nginx   LoadBalancer   10.103.206.98   <pending>     80:32334/TCP   41s

    NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/nginx-deployment   2/2     2            2           41s

    NAME                                          DESIRED   CURRENT   READY   AGE
    replicaset.apps/nginx-deployment-7c89967545   2         2         2       41s

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
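The idempotency described above (delete leftovers first, so every run starts from scratch) can be illustrated generically. This is a hedged sketch of the pattern, not the actual test: the resource names are hypothetical and the dict stands in for cluster state that real code would manage with velero and kubectl.

```python
# Sketch of the idempotent test pattern: clean up leftovers from a
# previous failed run before starting, and clean up again at the end.
# The dict stands in for cluster state; real code would call
# `velero backup delete` / `velero restore delete`.

def cleanup(cluster, names):
    """Delete leftover backups/restores, ignoring missing resources."""
    for name in names:
        cluster.pop(name, None)

def run_test(cluster):
    resources = ["nginx-backup", "nginx-restore"]
    cleanup(cluster, resources)              # junk from a failed run is removed
    cluster["nginx-backup"] = "Completed"    # stands in for: velero backup create
    cluster["nginx-restore"] = "Completed"   # stands in for: velero restore create
    cleanup(cluster, resources)              # leave nothing on the cluster

cluster = {"nginx-backup": "Failed"}  # leftover from a previous failed run
run_test(cluster)
print(cluster)  # -> {}
```

Because `cleanup` tolerates missing resources, the test behaves the same whether the previous run succeeded, failed midway, or never ran.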
We need to add a velero s3 secret for the velero bucket. Rename the ramen s3 secret so we can have consistent and clear names.

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Velero uses the same s3 bucket through the same s3 profile, but it needs a different secret, referenced by `veleroNamespaceSecretKeyRef`. The secret is created by `velero install`.

Seems to be complete, but not tested with ramen yet.

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
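The secret `velero install` creates from the credentials file has roughly the following shape; this is a hedged sketch under assumptions, with placeholder MinIO credentials, and the `cloud` key matching the `key: cloud` referenced by `veleroNamespaceSecretKeyRef` above:

```yaml
# Sketch (assumed shape) of the secret created by `velero install
# --secret-file ...`. The access keys are hypothetical placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
  namespace: velero
type: Opaque
stringData:
  cloud: |
    [default]
    aws_access_key_id = minio
    aws_secret_access_key = minio123
```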
Integrate velero in the test environment, installing velero on the managed clusters.

The velero addon now has a self test verifying that velero backup and restore work on a single cluster. The self test uses the nginx deployment copied from the velero release tarball, with a fix for vmware-tanzu/velero#6347.

`ramenctl config` was updated to configure the ramen velero options, mainly the special cloud secret generated by the `velero install` run by the velero addon.

The velero tool is a new requirement that must be installed manually like the other tools. We may want to improve this later.

Ramen kube object protection was not tested yet with this change.

Resolves #662