docs(proposal): Migration Rollback Protection #5101
Conversation
Force-pushed from 7d688b9 to 09ad847
Codecov Report: All modified and coverable lines are covered by tests ✅

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

```
@@            Coverage Diff             @@
##           master    #5101      +/-   ##
==========================================
- Coverage   34.87%   34.85%   -0.03%
==========================================
  Files         645      645
  Lines       44833    44861      +28
==========================================
- Hits        15637    15635       -2
- Misses      27991    28021      +30
  Partials     1205     1205
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Thanks a lot @CharlesQQ.
Hi @RainbowMango @chaunceyjiang @whitewindmills @zhy76 @buji-code @lcw2,
we can discuss here how to improve the resource deletion policy.
/assign
Force-pushed from 0f2fb08 to 2653e9d
Does this annotation have a knock-on effect on dependent resources? That is, if one adds such an annotation to their Deployment, will the annotation affect its dependent resources (like a ConfigMap)?
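To make the question concrete, here is a minimal sketch of applying such an annotation through the Karmada control plane. The annotation key `resourcetemplate.karmada.io/deletion-protected` and the kubeconfig path are assumptions for illustration, not names from the proposal. Note that the patch touches only the Deployment itself, which is exactly why the policy's scope for dependent resources (e.g. ConfigMaps) is worth pinning down.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path pointing at the Karmada apiserver.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/karmada/karmada-apiserver.config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Hypothetical annotation key: marks only this Deployment as protected
	// from cascading deletion. Dependent resources (such as a ConfigMap it
	// mounts) are not patched here, hence the question about knock-on effects.
	patch := []byte(`{"metadata":{"annotations":{"resourcetemplate.karmada.io/deletion-protected":"Always"}}}`)
	if _, err := client.AppsV1().Deployments("default").Patch(
		context.TODO(), "nginx", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```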
Force-pushed from 2653e9d to 09c6d78
Force-pushed from 09c6d78 to 281dbbf
Force-pushed from b9171d6 to 4ad9a83
Force-pushed from 5315523 to fe04cdf
You can think of it as an expansion of Solution 1. Solution 4 not only extends Work; for the current proposal it is a combination of two solutions, which I think is a bit redundant.

Extend by annotation or extend the fields of Work: choose one. I lean toward Solution 1.

Got it, Solution 1 (+1).
Solution One:
Disadvantages:
- When the `execution-controller` determines whether to cascade delete resources in the member clusters, it needs to parse the resources in the Work template from the manifest, which adds extra parsing overhead.
For now, during the process of deleting the workload, the execution-controller already unmarshals the manifest; does this approach add extra overhead?

karmada/pkg/controllers/execution/execution_controller.go
Lines 165 to 182 in b27e669

```go
func (c *Controller) tryDeleteWorkload(ctx context.Context, clusterName string, work *workv1alpha1.Work) error {
	for _, manifest := range work.Spec.Workload.Manifests {
		workload := &unstructured.Unstructured{}
		err := workload.UnmarshalJSON(manifest.Raw)
		if err != nil {
			klog.Errorf("Failed to unmarshal workload, error is: %v", err)
			return err
		}

		err = c.ObjectWatcher.Delete(ctx, clusterName, workload)
		if err != nil {
			klog.Errorf("Failed to delete resource in the given member cluster %v, err is %v", clusterName, err)
			return err
		}
	}
	return nil
}
```
Good point. For this approach, no extra parsing overhead is required.
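Since `tryDeleteWorkload` already unmarshals every manifest, a deletion-policy check can simply reuse the decoded object. Below is a minimal sketch of what such a check could look like; the annotation key and helper name are assumptions for illustration, not the merged design.

```go
package execution

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// preserveAnnotationKey is a hypothetical key; the proposal had not fixed a
// name at this point in the discussion.
const preserveAnnotationKey = "resourcetemplate.karmada.io/deletion-protected"

// shouldPreserveOnDeletion reuses the workload object that tryDeleteWorkload
// has already unmarshalled from manifest.Raw, so it adds no extra parsing pass.
func shouldPreserveOnDeletion(workload *unstructured.Unstructured) bool {
	// GetAnnotations may return nil; indexing a nil map safely yields "".
	return workload.GetAnnotations()[preserveAnnotationKey] == "Always"
}
```

Inside the loop in `tryDeleteWorkload`, the controller would then `continue` past `c.ObjectWatcher.Delete` whenever this returns true.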
Force-pushed from 7605dc3 to 914badb
Force-pushed from 914badb to 5ba79d2
```yaml
---
title: Use Cascading Deletion in Karmada

authors:
- "@CharlesQQ"

reviewers:
- "@robot"
- TBD

approvers:
- "@robot"
- TBD

creation-date: 2024-07-01
```
```suggestion
---
title: Use Cascading Deletion in Karmada
authors:
- "@CharlesQQ"
reviewers:
- "@robot"
- TBD
approvers:
- "@robot"
- TBD
creation-date: 2024-07-01
---
```
Please update the reviewers by adding those who reviewed this proposal.
Force-pushed from 7e4e4ea to 84038de
@@ -0,0 +1,388 @@
---
title: Use Cascading Deletion in Karmada
How about naming this feature `Migration Rollback Protection`? Forget `cascading` to avoid misleading users.
+1
Force-pushed from c2067b9 to a6086f2
cc @RainbowMango
<!--
Provide a deletion policy for federated resources, letting users choose whether workloads in member clusters are deleted synchronously when the federation-level workload is deleted.
This setting is especially useful in workload migration scenarios, as it ensures a rollback can be performed quickly without affecting the workloads running on the member clusters.
-->
Please remove all Chinese characters.
The current behavior of the Karmada system is that when a user deletes a resource from the Karmada control plane, the distributed resources in the member clusters are deleted synchronously. In some scenarios, such as workload migration rollback, users want to be able to retain the workloads in the member clusters.
-->

The current behavior of the Karmada system is that when a user deletes resources from the Karmada control plane, the distributed resources in the member clusters are also deleted synchronously. However, in certain scenarios, such as workload migration rollout scenarios, users may wish to retain the workloads in the member clusters.
```suggestion
The current behavior of the Karmada system is that when a user deletes resources from the Karmada control plane, the distributed resources in the member clusters are also deleted synchronously. However, in certain scenarios, such as workload migration scenarios, users may wish to retain the workloads in the member clusters.
```
Signed-off-by: chang.qiangqiang <chang.qiangqiang@immomo.com>
Force-pushed from a6086f2 to fb99665
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: RainbowMango

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
What type of PR is this?
What this PR does / why we need it:
Which issue(s) this PR fixes:
part of #4709
Special notes for your reviewer:
Does this PR introduce a user-facing change?: