
KEP-3335: StatefulSet Start Ordinal #3336

Merged (1 commit) on Oct 6, 2022

Conversation

@pwschuurman (Contributor) commented Jun 3, 2022:

  • One-line PR description: Adding new KEP to support StatefulSet start ordinal
  • Other comments:

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jun 3, 2022
@k8s-ci-robot (Contributor):

Welcome @pwschuurman!

It looks like this is your first PR to kubernetes/enhancements 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/enhancements has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jun 3, 2022
@k8s-ci-robot (Contributor):

Hi @pwschuurman. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Jun 3, 2022
@k8s-ci-robot k8s-ci-robot requested a review from kow3ns June 3, 2022 06:14
@k8s-ci-robot k8s-ci-robot added the kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory label Jun 3, 2022
@k8s-ci-robot k8s-ci-robot requested a review from soltysh June 3, 2022 06:14
@k8s-ci-robot k8s-ci-robot added the sig/apps Categorizes an issue or PR as relevant to SIG Apps. label Jun 3, 2022

Migrating a StatefulSet in slices allows for gradual migration of the application, as liveness is maintained. Consider the scenario of transferring pod ordinal ownership from an old StatefulSet with `N` pods to a new StatefulSet with `0` pods. Further, to maintain application availability, no more than `d` pods should be unavailable at any time during the transfer.

StatefulSets are implicitly numbered starting at ordinal `0`. When pods are being deployed, they are created sequentially in order from pod `0` to pod `N-1`. When pods are being deleted, they are terminated in reverse order from pod `N-1` to pod `0`. This behavior limits the migration scenario where an application operator wants to scale down pods in the old StatefulSet and scale up pods in the new StatefulSet. If pod `N-1` is removed from the old StatefulSet, there is no mechanism to create only pod `N-1` in a new StatefulSet without creating pods `[0, N-2]` as well. To do so would lead to the presence of duplicate pod ordinals (eg: pod `0` would exist in both StatefulSets).
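
As a concrete illustration of the ordering described above, here is a minimal sketch (the `my-app` name follows the example used later in this KEP; the image is arbitrary):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 5                     # N = 5
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
# Pods are created in order my-app-0 ... my-app-4 and deleted in reverse order
# my-app-4 ... my-app-0. There is no way to create only my-app-4 in a second
# StatefulSet without my-app-0 ... my-app-3 also being created.
```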
Contributor:

I wonder what is preventing the app admin from migrating pod i to become pod N-i in the new cluster? Does the order matter to the user?

Comment:

I don't understand either. Maybe the DNS domain can be the same, which is $(pod-name).$(service-name).$(namespace).svc.cluster.local, but that should be meaningless since it is in a different cluster.

@pwschuurman (Contributor Author):

Depending on the application and configuration, the ordinal that is assigned to a pod can be significant (eg: if the pod name is passed down to the underlying application using the downward API, or the application uses the hostname to extract the ordinal). This is one reason to preserve the ordinal "i" when migrating a pod from the old to the new cluster.

Additionally if the StatefulSet is using a volumeClaimTemplate, the PV/PVC will be linked to the pod through ordinal "i". If underlying data is to be transferred or referenced in a new cluster, this adds additional complexity for the app admin. Consider migrating a pod with ordinal "m" remapped to pod "n" in a new cluster: the PV/PVC name in the new cluster would need to be updated to reference ordinal "n".
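
To make the PV/PVC coupling concrete, here is a small sketch of how the ordinal flows into PVC names (the `data` claim name is hypothetical; `my-app` follows the example used elsewhere in this KEP):

```yaml
# Inside a StatefulSet named "my-app":
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 10Gi
# The controller names each PVC <template-name>-<pod-name>, so pod my-app-3
# binds data-my-app-3. Remapping ordinal "m" to "n" in a new cluster therefore
# also changes the PVC reference from data-my-app-m to data-my-app-n.
```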

@SOF3 commented Aug 5, 2022:

Another possible user story: I want to use kubefed to schedule a StatefulSet to multiple clusters, but doing so would result in a StatefulSet of [0, n) being dispatched as e.g. [0, m) on member cluster 1 and [0, n - m) on member cluster 2. The ability to set the start ordinal to m on member cluster 2 would make it possible to dispatch the slice [m, n) there, correctly splitting the StatefulSet across member clusters.

@pwschuurman (Contributor Author):

> Another possible user story: I want to use kubefed to schedule a StatefulSet to multiple clusters, but doing so would result in a StatefulSet of [0, n) being dispatched as e.g. [0, m) on member cluster 1 and [0, n - m) on member cluster 2. The ability to set the start ordinal to m on member cluster 2 would make it possible to dispatch the slice [m, n) there, correctly splitting the StatefulSet across member clusters.

Thanks @SOF3 for the use case. What is the benefit of having unique ordinals per clusters when using KubeFed? Does this feature help with fungibility of replicas across clusters, if KubeFed were to leverage this feature? (eg: ordinal k could be moved from member cluster 1 to member cluster 2 without needing to change underlying PVC name of the replica).

@SOF3 commented Aug 6, 2022:

> Thanks @SOF3 for the use case. What is the benefit of having unique ordinals per clusters when using KubeFed? Does this feature help with fungibility of replicas across clusters, if KubeFed were to leverage this feature? (eg: ordinal k could be moved from member cluster 1 to member cluster 2 without needing to change underlying PVC name of the replica).

@pwschuurman Yes, it allows a special scheduler that can assign a different starting ordinal for each member cluster.

Currently a Deployment of 20 replicas may be distributed as

```yaml
- cluster: a
  overrides:
    - {path: "/spec/replicas", value: 5}
- cluster: b
  overrides:
    - {path: "/spec/replicas", value: 15}
```

With the proposed changes, a StatefulSet may be distributed as

```yaml
- cluster: a
  overrides:
    - {path: "/spec/replicas", value: 5}
    - {path: "/spec/replicaStartOrdinal", value: 0}
- cluster: b
  overrides:
    - {path: "/spec/replicas", value: 15}
    - {path: "/spec/replicaStartOrdinal", value: 5}
```

Then this would also generate a total of 20 pods with ordinals [0, 20) across all clusters.

@janetkuo (Member):

@soltysh @kow3ns

@SOF3 commented Sep 16, 2022:

Is there future compatibility for disjoint slices with the same name?

The use case is like this: Initially, we schedule foo to cluster A with 5 replicas and cluster B with 8 replicas, so cluster A has [0, 5) and cluster B has [5, 13). Later on, cluster B is running out of spare resources (still sufficient for the 8 replicas), but cluster A has got more resources. When we want to scale up foo from 13 replicas to 16 replicas, we want to have cluster A with replicas [0, 5) U [13, 16) and cluster B with replicas [5, 13). It is not possible to create two StatefulSets because we cannot have two StatefulSets with the same name, such that foo-4 and foo-13 get created but foo-8 doesn't.

A single replicaStartIndex would be confusing if this feature is ever added in the future.

Comment on lines 266 to 267
```yaml
# shared namespace              # app-team namespace
replicas: 3                     replicas: 2
replicaStartOrdinal: 0          replicaStartOrdinal: 3
```
Contributor:

So the idea behind slice orchestration is that the application operator will continuously update the values of replicas and replicaStartOrdinal until all the replicas are migrated over to the app-team namespace? It can also decide whether to continue or halt the rest of the migration based on e.g., new replicas' health etc.?

@pwschuurman (Contributor Author):

Yes, that's correct. It provides the application operator granularity and control when moving replicas over.

@k8s-ci-robot k8s-ci-robot added the sig/multicluster Categorizes an issue or PR as relevant to SIG Multicluster. label Sep 16, 2022
@pwschuurman (Contributor Author):

> Is there future compatibility for disjoint slices with the same name?
>
> The use case is like this: Initially, we schedule foo to cluster A with 5 replicas and cluster B with 8 replicas, so cluster A has [0, 5) and cluster B has [5, 13). Later on, cluster B is running out of spare resources (still sufficient for the 8 replicas), but cluster A has got more resources. When we want to scale up foo from 13 replicas to 16 replicas, we want to have cluster A with replicas [0, 5) U [13, 16) and cluster B with replicas [5, 13). It is not possible to create two StatefulSets because we cannot have two StatefulSets with the same name, such that foo-4 and foo-13 get created but foo-8 doesn't.
>
> A single replicaStartIndex would be confusing if this feature is ever added in the future.

Disjoint sets did come up in design, but they weren't needed for the two use cases described in the original KEP. In addition to your use case, they could be useful in testing (eg: omitting a particular ordinal of a StatefulSet). The challenge with integrating an API for disjoint sets is that the replicas field may no longer be useful (either the disjoint set API is used, or the single set with replicas is used). Consider a scenario where the StatefulSet spec is augmented with an array of slices ([0, 5), [13, 16)) to allow for a disjoint set. In that scenario, replicas may carry redundant information (eg: does it refer to the total # of replicas, which is 8?).

If we do want disjoint sets, I think introducing a new CRD that represents the slice would make more sense (eg: similar to how EndpointSlices came to be: https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/), along with an API in StatefulSet to reference a group of slices.
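
Purely to illustrate the ambiguity described above, a hypothetical shape for an "array of slices" field (no such field exists in the StatefulSet API, and none is proposed by this KEP):

```yaml
spec:
  replicaSlices:            # hypothetical field, for illustration only
  - start: 0
    count: 5                # ordinals [0, 5)
  - start: 13
    count: 3                # ordinals [13, 16)
  replicas: 8               # ambiguous: a redundant total, or something else?
```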

@pwschuurman (Contributor Author):

@JeremyOT @pmorie for sig-multicluster visibility

@lauralorenz (Contributor) left a comment:

A few things to add after cross-referencing the 1.26 enhancements announcement.

FYI actual freeze dates are 9/29 for PRR review requests, 10/6 for enhancements freeze

cc @JeremyOT @pmorie

On keps/sig-multicluster/3335-statefulset-slice/kep.yaml (outdated):
reviewers:
- TBD
approvers:
- TBD
Contributor:

IIUC based on SIG-MC comments today, sounds like this needs at least a SIG-MC chair and a SIG-Apps chair

Contributor:

The above lists should be filled in, otherwise we don't know whom you expect to approve this.

keps/prod-readiness/sig-multicluster/3335.yaml (outdated, resolved)
@pwschuurman pwschuurman force-pushed the statefulset-slice branch 3 times, most recently from d12f495 to 1778711 on September 21, 2022 00:14


To move two pods, the `my-app` StatefulSet in the `shared` namespace can be scaled down to `replicas: 3, replicaStartOrdinal: 0`, and an analogous StatefulSet in the `app-team` namespace scaled up to `replicas: 2, replicaStartOrdinal: 3`. This allows for pod ordinals to be managed during migration. The application operator should manage network connectivity, volumes and slice orchestration (when to migrate and by how many replicas).
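
A sketch of the two specs at that point in the migration, using the `replicaStartOrdinal` field name as proposed in this KEP (selectors and pod templates omitted for brevity):

```yaml
# shared namespace: old StatefulSet keeps ordinals [0, 3)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
  namespace: shared
spec:
  replicas: 3
  replicaStartOrdinal: 0
---
# app-team namespace: new StatefulSet takes over ordinals [3, 5)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
  namespace: app-team
spec:
  replicas: 2
  replicaStartOrdinal: 3    # pods my-app-3 and my-app-4
```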
Contributor:

I think I'm missing why the ordinal is the problem in this use case. Are you trying to make sure that the workload within the namespace can only look at the pod name to determine what slice they are? There seem to be some assumptions about how the workload uses the ordinal that would be better to make explicit.

My naive (not understanding the context) answer to this problem would be: "the pod name is already namespaced, so if your workload is using ordinal to separate the workload you should really be using namespace + pod name + ordinal" - i.e. that the workload is making an unsafe assumption that should be addressed.

In general, we don't support any cross namespace reference on workload, so any details you can add here to explain how the ordinal is used would help.

@pwschuurman (Contributor Author) commented Sep 21, 2022:

> In general, we don't support any cross namespace reference on workload, so any details you can add here to explain how the ordinal is used would help.

This makes sense, and under steady state conditions I would agree. The case for having a workload split across namespaces or clusters is re-organization: moving it to a new ownership paradigm. The only supported way to handle this today is re-creation of a workload in a new namespace/cluster, which results in workload downtime.

> My naive (not understanding the context) answer to this problem would be: "the pod name is already namespaced, so if your workload is using ordinal to separate the workload you should really be using namespace + pod name + ordinal" - i.e. that the workload is making an unsafe assumption that should be addressed.

The practical use is to have the ordinal separate the workload across organizational units (cluster/namespace), but not to separate the workload logically.

> that the workload is making an unsafe assumption that should be addressed.

What is the unsafe assumption from your perspective? My takeaway about safety is that this can break at-most-one semantics in a StatefulSet. If the workload represents the same logical replica ordinal across organizational units (namespace/cluster), there isn't an invariant within the apiserver to enforce this. It would be up to the business logic of an application operator to enforce this invariant, rather than API invariants.

Contributor:

> The only supported way to handle this today is re-creation of a workload in a new namespace/cluster, which results in workload downtime.

So you are looking to orchestrate cross cluster moves of statefulsets without downtime - are you trying to do this for all statefulsets, or just those that have been adapted to support movement? Some of these assumptions are the context I was looking for, which helps me understand how you intend to use this and how to review it.

Regarding unsafety, someone orchestrating moves where the workload is ordinal aware (the user has encoded assumptions about the ordinal into the state of each instance and may use them arbitrarily) will need to be very careful, and if the goal is that all statefulsets can be orchestrated, there may be assumptions about namespace / cluster factored in that do require the user to modify their workload.

To be safe, you also have to verify you have observed the deletion of the “high” ordinals on the source cluster, which if I better understood (perhaps there is a doc somewhere with your orchestration design?) might also have other implications about what changes we need.

One final reason I asked the general question - we decided to limit configurability of ordinals originally for simplicity. Adding more control would open the door for us to ensure other use cases were not limited by “just adding the start” (an example would be topology aware ordinal assignment), but that might impact your described use case negatively.

In general re: statefulset movement, it’s not unreasonable to ask the workload author to fit within some constraints, and “deal with overlapping ordinals” might be a better overall approach if we ended up helping the user to avoid coupling too strongly to a single cluster’s representation. I just want to be sure we explore that.

Contributor:

One more comment - is the ordinal important for networking rules? That was the heart of my question about cross-namespace - if you are attempting to move across clusters or namespaces you are giving up the ability to maintain a network quorum by looking at the service or the pod name via dns, which is one of the signature properties of stateful workloads under statefulsets. If you’re willing to live without that, then the ordinal seems almost as unimportant (because you can’t magically make pod-0 in ns1 be unaware of the change in pod-7 to ns2). Unless you are proposing the orchestrator handle that as well?

Knowing what level of intent you have to make this transparent helps refine how critical this is.

@pwschuurman (Contributor Author):

> So you are looking to orchestrate cross cluster moves of statefulsets without downtime - are you trying to do this for all statefulsets, or just those that have been adapted to support movement?

Only looking to move those that have been adapted to support movement. Any StatefulSet that is moved across clusters will need to be multi-cluster aware on some level, as the actual workload running will need to have some way to connect to peers across clusters. This could be transparent to the actual application running (eg: if using multi-cluster services), but it would require a Kubernetes Administrator or Application Administrator to configure networking across clusters.

> To be safe, you also have to verify you have observed the deletion of the “high” ordinals on the source cluster, which if I better understood (perhaps there is a doc somewhere with your orchestration design?) might also have other implications about what changes we need.

Correct, the orchestration logic would manage the correct order of operations (deleting the "high" ordinal on the source cluster, before provisioning the "high" ordinal to the destination cluster).

> One final reason I asked the general question - we decided to limit configurability of ordinals originally for simplicity. Adding more control would open the door for us to ensure other use cases were not limited by “just adding the start” (an example would be topology aware ordinal assignment), but that might impact your described use case negatively.

The underlying mechanics I'm trying to achieve with this KEP are to allow a StatefulSet to be scaled down from N -> 0 in a source cluster (scaling down ordinal "N-1" first), and scaled up from 0 -> N in the destination cluster (scaling up ordinal "N-1" first), pod-by-pod. Allowing a StatefulSet to start from an arbitrary ordinal might be adding too much configurability. This could also be achieved with a reverse scale-up flag (eg: scale up "N-1" instead of "0", as is the default with OrderedReady pod management today).

> In general re: statefulset movement, it’s not unreasonable to ask the workload author to fit within some constraints, and “deal with overlapping ordinals” might be a better overall approach if we ended up helping the user to avoid coupling too strongly to a single cluster’s representation. I just want to be sure we explore that.

Workloads could be aware of this, and have a failover or promotion mechanism that allows a pod ordinal to relinquish control from the source to destination cluster. However one of the constraints I was trying to avoid surrounds PV/PVC re-use across clusters. If pods are referencing the same PV/PVC (as is a typical scenario with a RWO PV in a StatefulSet), the destination cluster wouldn't have a way of blocking binding attempts of the existing PV/PVC to a node in the destination cluster when an ordinal overlaps with a running ordinal in the source cluster. This guard could be handled at the storage level, but it's not as elegant a design, and it requires the orchestrator to communicate through a side-channel with the storage layer.

> One more comment - is the ordinal important for networking rules? ... Unless you are proposing the orchestrator handle that as well?

The ordinal is very important for networking rules, since it allows peers to map an endpoint to data owned by a replica at that ordinal. If using multi-cluster headless services, this can be achieved when querying a headless endpoint. So ideally pod-0 in ns1 would be aware of the change to pod-7 in ns2 through DNS discovery (eg: through a multi-cluster headless service), or by having pod-7 in ns2 connect to its previously known quorum group to update its new FQDN. The goal of having this migration story is to allow a StatefulSet workload to be migrated without the actual application being fully aware of or involved with the cross-cluster move. Backup/restore alternatives exist, as well as application-specific migration tooling (eg: etcd mirror maker, Redis migrate), but these incur downtime, application-specific co-ordination, and network data-transfer costs.

Contributor:

Thanks, will review your responses in more detail before I respond. One immediate question:

> Allowing a StatefulSet to start from an arbitrary ordinal might be adding too much configurability. This could also be achieved with a reverse scale-up flag (eg: scale up "N-1" instead of "0", as is the default with OrderedReady pod management today).

Another option might be to put a stateful set in a "paused mode" or to add a way for individual ordinals to be marked as "paused" at the workload object level, then orphan the pods one by one via the orchestrator. The ability for an admin to "freeze" a particular ordinal might very well be generally useful for debugging or recovery, and we might be able to find ways to overlap that use case with other operator behavior today. I know that currently if you wanted to perform debugging on a particular pod, you might want to ensure that upgrades / etc are blocked / held by that pod, or test how the workload works in the absence of that pod. I don't know how complex that might get, but it would be one way to overlap the ask with something that grows the utility of workloads on a single cluster as well.

@pwschuurman (Contributor Author) commented Sep 26, 2022:

> Another option might be to put a stateful set in a "paused mode" or to add a way for individual ordinals to be marked as "paused" at the workload object level ... The ability for an admin to "freeze" a particular ordinal might very well be generally useful for debugging or recovery, and we might be able to find ways to overlap that use case with other operator behavior today

If there is an API to freeze an ordinal level (eg: KEP-3521 or otherwise), the mechanics of pausing a pod ordinal could be very similar to the workflow I described here: #3522 (comment).

@pwschuurman (Contributor Author):

The goal of this KEP, from the use cases described, is to isolate slices of replicas across two StatefulSets. For an application orchestrator to migrate replicas, it can scale down replicas in StatefulSet A, and scale replicas up in StatefulSet B. For a StatefulSet using OrderedReady pod management, pods are scaled up from pod "0" to pod "N-1" (where there are N replicas), and scaled down from pod "N-1" to pod "0".

@soltysh raised the concern that this KEP introduces a significant change to the StatefulSet APIs. To reduce the complexity of this API change and implementation, rather than introducing a .spec.replicaStartOrdinal, a new PodManagement policy could be introduced, named ReverseOrderedReady. This would have the same semantics as OrderedReady, but scale pods from "N-1" to "0". This would allow an orchestrator to complement the scale down of an OrderedReady StatefulSet with the scale up of a ReverseOrderedReady StatefulSet, and would narrow the API scope of this KEP compared to the proposed .spec.replicaStartOrdinal.

Once migrated, if an orchestrator brings up a StatefulSet with ReverseOrderedReady and wants to change the StatefulSet to match OrderedReady (as in the original cluster), the StatefulSet would need to be deleted (with --cascade=orphan) and re-created with the updated PodManagement policy.
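
To make this alternative concrete, a hypothetical sketch; `ReverseOrderedReady` is not an actual API value (only `OrderedReady` and `Parallel` exist today):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
  namespace: app-team
spec:
  podManagementPolicy: ReverseOrderedReady   # hypothetical: scale pods from "N-1" down to "0"
  serviceName: my-app
  replicas: 5
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
# podManagementPolicy is immutable, so switching back to OrderedReady after the
# migration would require deleting the object with --cascade=orphan and
# re-creating it, as described above.
```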

@pwschuurman (Contributor Author):

> I'm fine with the KEP from a technical pov from sig-apps side, but you need to update the metadata so that it matches reality (approvers, reviewers, owning sig, participating sig, PRR approver) so folks can approve this.

Thanks @soltysh, updated the metadata to match the current state of reviewers and owning sig.

@soltysh (Contributor) left a comment:

/lgtm
/approve
although the verify failures are to be fixed

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 6, 2022
Add first draft of KEP for StatefulSet Slice
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 6, 2022
@pwschuurman (Contributor Author):

> although the verify failures are to be fixed

Thanks @soltysh, updated the README.md TOC.

@wojtek-t (Member) commented Oct 6, 2022:

/approve PRR

@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pwschuurman, soltysh, wojtek-t

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 6, 2022
@wojtek-t (Member) commented Oct 6, 2022:

/lgtm

/hold
for @smarterclayton - to confirm

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Oct 6, 2022
@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 6, 2022
@smarterclayton (Contributor):

/lgtm

as well, and thank you for bearing with me as we threaded the needle on the future growth of ordinals.

/hold cancel

Labels
  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • cncf-cla: yes: Indicates the PR's author has signed the CNCF CLA.
  • kind/kep: Categorizes KEP tracking issues and PRs modifying the KEP directory.
  • lgtm: "Looks good to me", indicates that a PR is ready to be merged.
  • ok-to-test: Indicates a non-member PR verified by an org member that is safe to test.
  • sig/apps: Categorizes an issue or PR as relevant to SIG Apps.
  • sig/multicluster: Categorizes an issue or PR as relevant to SIG Multicluster.
  • size/XXL: Denotes a PR that changes 1000+ lines, ignoring generated files.