Because of kubernetes/autoscaler#6490 (comment), the cluster autoscaler sometimes gets stuck in a loop where it thinks it doesn't have enough privileges to continue.
Deleting the pod gets it going again, but that depends on someone noticing the problem.
I propose that we either monitor the autoscaler daily to detect when it gets stuck, OR we add the RBAC the controller thinks it needs until the upstream issue is fixed.
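For the RBAC option, something along these lines could work. This is only a minimal sketch: the actual resources and verbs would have to be taken from the permission errors reported in kubernetes/autoscaler#6490 (the rules below are placeholders), and the ClusterRole name, service account name, and namespace are assumptions for illustration.

```yaml
# Sketch only: grant the cluster autoscaler the extra permissions it complains
# about until the upstream issue is fixed. The resources/verbs below are
# placeholders; replace them with the ones from the autoscaler's RBAC errors.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler-6490-workaround   # assumed name
rules:
  - apiGroups: [""]
    resources: ["<resource-from-error-message>"]   # placeholder
    verbs: ["get", "list", "watch"]                # placeholder
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler-6490-workaround   # assumed name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler-6490-workaround
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler    # assumed service account name
    namespace: kube-system      # assumed namespace
```

Keeping the workaround in its own ClusterRole/ClusterRoleBinding (rather than editing the autoscaler's existing role) would make the eventual revert a simple deletion of these two objects.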
Unless the RBAC it thinks it needs is very invasive, I think that is a better workaround than constant manual monitoring. If we go the RBAC route, let's make sure a revert PR or issue is in place right after the change merges.