docs(proposal): add docs for deletion policy
docs/proposals/prevent-removal-managed-resources/README.md

---
title: Prevent Removal of Managed Resources
authors:
- "@CharlesQQ"
reviewers:
- "@RainbowMango"
- "@XiShanYongYe-Chang"
- "@chaunceyjiang"
approvers:
creation-date: 2024-06-11
---

# Prevent Removal of Managed Resources

## Summary

This proposal introduces a deletion mechanism and its API to prevent the resources that Karmada manages in member clusters from being cascade-deleted when the corresponding resource is removed from the control plane.

## Motivation

Every Work object reconciled by the execution controller has the finalizer `karmada.io/execution-controller` added to its metadata.
This finalizer prevents deletion of the Work object until the execution controller has had a chance to perform pre-deletion cleanup.
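
For illustration, a minimal sketch of a Work object carrying this finalizer; the Work name and execution-space namespace are hypothetical:

```yaml
apiVersion: work.karmada.io/v1alpha1
kind: Work
metadata:
  name: nginx-687f7fb96f            # hypothetical Work name
  namespace: karmada-es-member1     # hypothetical execution-space namespace for cluster "member1"
  finalizers:
    - karmada.io/execution-controller   # blocks deletion until pre-deletion cleanup has run
spec:
  workload:
    manifests:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx
        # ... propagated manifest elided
```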

Pre-deletion cleanup of a Work object includes removing the resources managed by Karmada from member clusters.

Karmada should provide a removal strategy that keeps these managed resources, similar to `kubectl delete sts xxx --cascade=orphan`, where the StatefulSet is deleted but its Pods are kept.
However, different kinds of resources require different removal-prevention policies:

1. Resource template: clean up the Karmada-owned labels/annotations from the resource on member clusters (see the sketch after this list), then delete the Work object.
2. Dependent resources (such as ConfigMaps and Secrets): clean up the Karmada-owned labels/annotations from the dependent resources on member clusters, then delete the Work object.
3. Resources that generate Work objects directly, such as Namespace/CronFederatedHPA/FederatedHPA/FederatedResourceQuota: apply the same strategy as resource templates.
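
To illustrate what "clean up the labels/annotations" refers to, the sketch below shows the kind of Karmada-owned metadata found on a propagated resource in a member cluster; the exact keys vary by Karmada version, and the values here are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    karmada.io/managed: "true"                      # marks the resource as managed by Karmada
  annotations:
    work.karmada.io/name: nginx-687f7fb96f          # hypothetical owning Work name
    work.karmada.io/namespace: karmada-es-member1   # hypothetical execution-space namespace
```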

### Goals
- Provide an API to prevent cascading deletion of managed resources.
- Provide a command-line way to use the feature, e.g. `karmadactl orphaningdeletion complete/disable <resource type> <name>`.

### Non-Goals/Future Work
- Other deletion policies, such as retaining the Work object.

## Proposal
### User stories

#### Story 1
A member cluster is often built, with Pods already running, before it joins Karmada. After joining, the user creates the resource template and a PropagationPolicy in the control plane, and the execution controller starts updating the resources in the member cluster. If the resulting replica topology or Pods turn out not to be what was expected, the usual approach is to roll back to the previous state. A way is therefore needed to make member-cluster resources completely independent of the Karmada control plane again.

## Design Details

### API changes
- Add the annotation `karmada.io/deletionpolicy=complete` to the resource template, or to any other resource that creates a Work object directly.
- The value of `karmada.io/deletionpolicy` should be an enum so that additional policies can be added later.
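
A minimal sketch of a resource template in the control plane carrying the proposed annotation; the Deployment itself is only an example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  annotations:
    karmada.io/deletionpolicy: complete   # proposed: keep member-cluster resources when this template is deleted
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```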

### The deletion policy of dependent resources follows the main resource
![Dependencies resource deletion policy](statics/dependencies-resource-deletion-policy.png)

### Add a karmadactl orphaning-deletion command
- `karmadactl deletionpolicy complete <resource type> <name>` adds the annotation `karmada.io/deletionpolicy=complete`.
- `karmadactl deletionpolicy disable <resource type> <name>` removes the annotation `karmada.io/deletionpolicy`.
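
A sketch of the intended effect on the resource template's metadata; the command and annotation names are those proposed above, and the Deployment name is hypothetical:

```yaml
# After `karmadactl deletionpolicy complete deployment nginx`
metadata:
  name: nginx
  annotations:
    karmada.io/deletionpolicy: complete   # member-cluster resources will be kept on deletion
---
# After `karmadactl deletionpolicy disable deployment nginx`
metadata:
  name: nginx
  annotations: {}                         # annotation removed; normal cascading deletion applies
```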