Postgresql connections #11
Hi @marosg42, it is a duplicate of the MM reply. The option experimental_max_connections is really an …
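For reference, a quick way to read back the option in question on a live deployment (a minimal sketch; the application name postgresql is an assumption about the model):

```shell
# Print the current value of the option discussed above.
# "postgresql" is an assumed application name; adjust to your model.
juju config postgresql experimental_max_connections
```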
Hi @marosg42, I just added a draft PR that includes pgbouncer in the PostgreSQL installation with maas-anvil. As I mentioned inside, for the moment it is blocked by this issue: canonical/pgbouncer-operator#245
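For context, a minimal sketch of what placing pgbouncer in front of Charmed PostgreSQL looks like in a plain Juju model (the channel, endpoint names, and application names are assumptions, not the exact changes from the draft PR):

```shell
# Deploy pgbouncer from the 1/edge channel mentioned later in this thread
# and point it at the existing postgresql application.
juju deploy pgbouncer --channel 1/edge
juju integrate pgbouncer:backend-database postgresql:database
# Client applications (e.g. the MAAS regions) would then relate to pgbouncer
# instead of relating to postgresql directly.
```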
@skatsaounis please share the test results for #16. Thank you!
Hi @taurus-forever. As you can observe in the linked PR's checks, https://github.com/canonical/maas-anvil/actions/runs/9480678236/job/26510157069?pr=16, we have a successful run. When I was informed that a new revision was released in pgb 1/edge, I re-triggered the failed job and it succeeded. This job produces a complete single-node anvil deployment. If it ends with all charmed apps in active status, that means the deployment is healthy.
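For anyone reproducing this outside of CI, a minimal sketch of the "all charmed apps in active status" check (the model name maas-anvil is hypothetical and jq is assumed to be available):

```shell
# List any application in the model whose status is not "active";
# an empty result means the deployment settled successfully.
juju status -m maas-anvil --format=json \
  | jq -r '.applications | to_entries[]
           | select(.value["application-status"].current != "active")
           | .key'
```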
Hi @taurus-forever, @dragomirp, @delgod. After the merge of the initial PR #16, which introduced pgbouncer, and some test runs with the change included in the anvil snap, we came to the conclusion that inevitably we have to choose pool_mode ….

Since this change leads to database restarts, we also considered that if the maas-anvil practitioner knows beforehand the total number of regions, then they can set the corresponding connection limit up front.

The linked #32 tries to handle all of the above configuration choices. Hopefully, we will be able to test it and confirm whether it is the appropriate solution for the charmed MAAS use case. I will keep you updated.
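As a hedged sketch, the configuration choices above would be expressed roughly as follows (pool_mode is an option on the pgbouncer charm; the value shown is a placeholder for illustration, not necessarily the choice made in #32):

```shell
# Pick the pgbouncer pooling mode discussed above. "session" is a placeholder
# value for illustration, not necessarily the choice made in #32.
juju config pgbouncer pool_mode=session

# If the total number of regions is known beforehand, the connection limit
# can be preset the same way (see the workaround at the end of this thread).
```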
In addition, here is a diagram shared by @dragomirp that includes pgbouncer, the PostgreSQL cluster, and 3 MAAS regions. It shows the outcome of the settings discussed above.
When I run a lot of MAAS commands, I eventually get:
FATAL: remaining connection slots are reserved for non-replication superuser connections
This workaround worked for me; of course, I don't know the magic number, and the parameter is currently in beta only.
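The exact workaround command is not captured above; based on the parameter named earlier in the thread, it presumably looked something like the following (the value stands in for the reporter's "magic number" and is not a recommendation):

```shell
# Raise the connection limit via the (beta) charm option mentioned earlier
# in the thread. The number is illustrative only.
juju config postgresql experimental_max_connections=300
```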