Adds Google Cloud SQL Auth Proxy Support #372
base: master
Conversation
When the PostgreSQL database is hosted on Google Cloud SQL and the application runs on a GKE cluster, there can be a need to use the Cloud SQL Auth Proxy. Using the Auth Proxy is also the recommended approach for connecting to Cloud SQL from GKE (https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine).

The mechanism for controlling the database upgrade has also been modified: .Values.postgresql.upgradeDb takes a boolean value and controls whether the graphql node runs the db upgrade script.
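A minimal sketch of how the proposed flag might appear in values.yaml (the upgradeDb key name comes from this PR; the surrounding structure is illustrative):

```yaml
postgresql:
  # When true, the graphql container runs the db upgrade script on startup.
  # When false, the upgrade step is skipped (e.g. schema managed out of band).
  upgradeDb: true
```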
Thanks for contributing! I'm hesitant to accept such a change for such a specific use case. Is it not feasible to inject the sidecar using more general settings and just document the necessary configuration for Cloud SQL? Separately, I'm wary of changes to the database upgrade init container. Why was this necessary for this feature?
No problemo! I'm not a Kubernetes expert at all. How would one inject a sidecar using more general settings? If this can be done and documented, I'd see that as a viable option going forward. However, the database upgrade init container also needs a connection to the database, and in my experimentation I could not attach a proxy sidecar to the init container. This meant the init container could not run successfully, which in turn stopped the graphql component from starting. Removing the init container and moving the upgrade step back into the startup of the graphql container resolved the issue: the cloud_sql_proxy container could be started, and the graphql container would then use the proxy to connect to the database.
See https://github.com/bitnami/charts/blob/ce117454d2a3bd027a97083b28deaea608af8de5/bitnami/postgresql/values.yaml#L553-L563 for example.
This is a great justification for moving the database migrations into the main container. I'm not sure what the implications of this are, or why the migrations are run in an init container in the first place. I'll check with the team, but it seems like a reasonable change.
Ah, I see. Yes, that would be a great way to do this part: rather than allowing only a specific sidecar for the GCE proxy, allow arbitrary sidecars. This would require adding a sidecars variable to the values and changing the container YAML to inject the configured sidecars, if any. Let me know which way the team wants to go and I can adjust the approach of this pull request. Appreciate you considering this further!
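Following the bitnami pattern linked above, the generic mechanism might look like the sketch below. The sidecars key, the template indentation, and the proxy container details are all illustrative, not this chart's actual structure:

```yaml
# values.yaml: user-supplied sidecar containers, empty by default.
sidecars:
  - name: cloud-sql-proxy
    image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0  # version illustrative
    args:
      - "--port=5432"
      - "my-project:my-region:my-instance"  # placeholder connection name
```

```yaml
# deployment.yaml template: render any configured sidecars
# next to the main container.
      containers:
        - name: graphql
          # ... existing container spec ...
        {{- if .Values.sidecars }}
        {{- toYaml .Values.sidecars | nindent 8 }}
        {{- end }}
```

With this in place, the CloudSQL setup becomes pure documentation: users who need the Auth Proxy supply it via sidecars, and the chart itself stays use-case agnostic.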
Summary
Adds support to the Helm charts for using the Google Cloud SQL Auth Proxy
Importance
Under certain conditions, connectivity to a database hosted on Google Cloud SQL can only
be achieved by using the Cloud SQL Auth Proxy. To utilise the Auth Proxy, a sidecar
container needs to be deployed alongside the container that requires a connection
to the database.
Using the Auth Proxy as a sidecar is the recommended approach for connecting from
a Deployment hosted on a Google Kubernetes Engine cluster to a Cloud SQL instance.
See https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine#introduction
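The sidecar pattern described above follows Google's documented approach: the proxy container listens on localhost inside the pod, and the application connects to it instead of to the database directly. A sketch of the sidecar entry in the Deployment's pod spec (image tag and instance connection name are placeholders):

```yaml
      # Added to spec.template.spec.containers, next to the app container.
      - name: cloud-sql-proxy
        image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0  # version illustrative
        args:
          - "--structured-logs"
          - "--port=5432"                          # app connects to 127.0.0.1:5432
          - "my-project:my-region:my-instance"     # placeholder connection name
        securityContext:
          runAsNonRoot: true
```

The application container then uses 127.0.0.1:5432 as its database host; the proxy handles authentication and TLS to the Cloud SQL instance.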
Checklist
This PR:
- adds new tests (if appropriate)
- adds a changelog entry in the changes/ directory (if appropriate)