
As a Korifi user I do not want my application to restart when I bind/unbind a service instance to it #3087

Open
georgethebeatle opened this issue Feb 6, 2024 · 5 comments
Labels
bug (Something isn't working), disruptive

Comments

@georgethebeatle
Member

Background

In CF for VMs, when a user binds a service instance to an app, the app does not immediately see the new service binding: it needs to be restarted before the updated value of the VCAP_SERVICES env var is available to it. This is normal and expected, as the process env is immutable. The cf CLI outputs a tip regarding this behaviour:

Binding service instance s to app dorifi in org org1 / space space2 as cf-admin...
TIP: Use 'cf restage dorifi' to ensure your env variable changes take effect

In Korifi, binding a service to an app means two things:

  • Update the value of the VCAP_SERVICES env var (same as CF for VMs).
  • Create a servicebinding.servicebinding.io object that mounts the details of the service instance we are binding as a volume in the pod where the application is running. This is a new, pure Kubernetes way of consuming the service instance and is specified by the servicebinding.io spec.
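
For illustration, such a ServiceBinding object might look roughly like the sketch below. All names, the namespace, and the direct Secret reference are assumptions for the example, not Korifi's actual generated values:

```yaml
# Hypothetical ServiceBinding for app "dorifi" and service instance "s"
apiVersion: servicebinding.io/v1beta1
kind: ServiceBinding
metadata:
  name: dorifi-s-binding       # illustrative name
  namespace: space2
spec:
  # The resource holding the service instance credentials
  # (a direct Secret reference, as allowed by the servicebinding.io spec)
  service:
    apiVersion: v1
    kind: Secret
    name: s-credentials
  # The workload whose pod template the binding is projected into
  workload:
    apiVersion: apps/v1
    kind: StatefulSet
    name: dorifi-workload
```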

The servicebinding object injects new entries into the volumes and volumeMounts fields of the Pod or the PodTemplateSpec, which effectively restarts the pod. As a result, Korifi behaves slightly differently from classic CF by doing unexpected app restarts, which might mean downtime for the users of the app. Another inconvenience is that if the user wants to bind n service instances to the app, the app will be restarted n times.
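Per the servicebinding.io spec, the mutation performed on the workload's pod template looks roughly like this (concrete names and paths are illustrative):

```yaml
# Fields injected into the StatefulSet's pod template by the
# servicebinding.io webhook (sketch; names are examples)
spec:
  template:
    spec:
      volumes:
        - name: servicebinding-s          # injected
          projected:
            sources:
              - secret:
                  name: s-credentials
      containers:
        - name: workload
          env:
            - name: SERVICE_BINDING_ROOT  # injected per the spec
              value: /bindings
          volumeMounts:
            - name: servicebinding-s      # injected
              mountPath: /bindings/s
              readOnly: true
```

Any change to these pod template fields causes the statefulset controller to roll the pods, which is the restart described above.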

Reproduction steps

❯ cf bind-service dorifi s
Binding service instance s to app dorifi in org org1 / space space2 as cf-admin...
OK

TIP: Use 'cf restage dorifi' to ensure your env variable changes take effect

❯ cf apps
Getting apps in org org1 / space space2 as cf-admin...

name     requested state   processes   routes
dorifi   started           web:0/1     dorifi.apps-127-0-0-1.nip.io

Acceptance

GIVEN I have deployed an app a to Korifi and have created a service instance si
WHEN I run cf bind-service a si
THEN I can continue using the app without experiencing any downtime

@georgethebeatle georgethebeatle added the bug Something isn't working label Feb 6, 2024
@github-project-automation github-project-automation bot moved this to 🧊 Icebox in Korifi - Backlog Feb 6, 2024
@georgethebeatle georgethebeatle changed the title As a Korifi user I do not want my application to restart when I bind a service instance to it As a Korifi user I do not want my application to restart when I bind/unbind a service instance to it Feb 7, 2024
@danail-branekov
Member

For context, app restart does not cause downtime in cases when the app is scaled to more than one instance.

When the statefulset gets updated by servicebinding.io, the statefulset controller restarts its pods one by one (similarly to what we do on rolling restart).

@georgethebeatle
Member Author

We discussed this issue on the last working group sync and decided that it is not too critical, because:

  • As explained in the previous comment, the restart is done in a highly available manner, restarting instances one by one, so no downtime is incurred for apps running more than one instance.
  • Binding services is usually done as part of the initial setup of the application, when the app is unlikely to have real usage yet.
  • In order to avoid multiple restarts when binding multiple services, one could make use of app manifests. This is likely to already be the case for apps that use multiple services.
  • Despite the fact that restarting the app on service bind is a change in behaviour between traditional CF and Korifi, it is not too critical, because you have to restart your app for it to see the new binding anyway.
  • We spiked on making Korifi behave exactly like traditional CF, but it turned out that there is no reliable way to implement this because of the declarative nature of Kubernetes. It is very hard for our controllers to find out whether the app is being started, and doing so would go against best practices for writing controllers.
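
For instance, the multiple-bindings case can be reduced to a single restart with an app manifest that declares all services up front (app and service names below are examples):

```yaml
# manifest.yml — cf push applies all bindings in one go, so the app
# starts once with every binding already projected
---
applications:
  - name: dorifi
    instances: 2
    services:
      - my-db
      - my-cache
      - my-queue
```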

Given all of the above we think that it would be better to document the fact that Korifi would restart workloads when one binds services to them in the section dealing with differences from CF.

@georgethebeatle
Member Author

Given that we decided to accept the fact that binding/unbinding a service instance to an app automatically restarts the app, should we also restart the app when the service instance credentials change? This way we could simply document that Korifi automatically restarts apps whenever service details change, so that the app can immediately see the change. Wouldn't such a behaviour be more consistent than auto-restarting on bind/unbind but requiring a manual restart on credentials change?

We spiked a bit on this idea and pushed the relevant changes here

@georgethebeatle georgethebeatle moved this from 🧊 Icebox to 🇪🇺 To do in Korifi - Backlog Feb 15, 2024
@ggerla

ggerla commented Feb 22, 2024

In CF, binding is composed of two phases:

  1. setting the VCAP_SERVICES env var
  2. restage/restart

So my proposal here is:

  1. on the cf bind-service command, generate the VCAP_SERVICES env vars and store them in a secret/configmap
  2. on the cf restage command, read the vars from the secret/configmap, then prepare and deploy the servicebinding.servicebinding.io object, which will automatically restart the app.

In this way the restart is not done at bind time (unexpected) but at restage/restart time (expected).
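
A sketch of step 1 under this proposal, assuming the bindings are staged in a Secret until restage (all names and the JSON payload are illustrative, not an existing Korifi resource):

```yaml
# Hypothetical intermediate Secret written by `cf bind-service`;
# no ServiceBinding is created and no pod is touched at this point.
# At restage, a controller would read this Secret and only then
# create the servicebinding.io object that triggers the restart.
apiVersion: v1
kind: Secret
metadata:
  name: dorifi-pending-vcap-services   # illustrative name
  namespace: space2
stringData:
  VCAP_SERVICES: |
    {"user-provided": [{"name": "s", "credentials": {"uri": "..."}}]}
```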

@danail-branekov
Member

In order to achieve the behaviour you suggest, we would have to change the workload of the servicebinding.io binding. Currently the binding workload is the statefulset, so servicebinding.io updates the statefulset pod template via its webhook, which immediately triggers the statefulset controller to restart the pods.

We had an idea to change the binding workload to another resource (e.g. the AppWorkload), however we would:
a) need to spike on whether such an approach works at all
b) have to ensure that existing bindings still work after a Korifi upgrade
c) if possible, want to avoid pod restarts during the upgrade

If we prove that such an approach is feasible, we could change the AppWorkload (or whatever target resource we choose) controller to update the statefulset pod template only on restart/restage.

A bonus of such an approach would be that Korifi makes no assumptions that workloads are backed by statefulsets. This assumption is currently an issue if someone wants to bring their own workload runner (for example, a workload runner that uses deployments instead of statefulsets).
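
The idea of retargeting the binding can be sketched as a change to the workload reference (feasibility unproven per the caveats above; the group/version shown for AppWorkload is an assumption for illustration):

```yaml
# Hypothetical variant: point the ServiceBinding at Korifi's AppWorkload
# instead of the StatefulSet. The AppWorkload controller could then defer
# propagating the projected volumes into the statefulset pod template
# until an explicit restart/restage.
apiVersion: servicebinding.io/v1beta1
kind: ServiceBinding
metadata:
  name: dorifi-s-binding       # illustrative name
  namespace: space2
spec:
  service:
    apiVersion: v1
    kind: Secret
    name: s-credentials
  workload:
    apiVersion: korifi.cloudfoundry.org/v1alpha1
    kind: AppWorkload
    name: dorifi-workload
```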

@georgethebeatle georgethebeatle moved this from 🇪🇺 To do to 🧊 Icebox in Korifi - Backlog Mar 13, 2024
@georgethebeatle georgethebeatle moved this from 🧊 Icebox to 🇪🇺 To do in Korifi - Backlog Mar 13, 2024
@georgethebeatle georgethebeatle moved this from 🇪🇺 To do to 🧊 Icebox in Korifi - Backlog Mar 13, 2024