[DPE-3777] Add node port support #264
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##             main     #264      +/-   ##
==========================================
- Coverage   72.24%   71.90%   -0.35%
==========================================
  Files           8        8
  Lines        1081     1075       -6
  Branches      185      182       -3
==========================================
- Hits          781      773       -8
- Misses        230      235       +5
+ Partials       70       67       -3

☔ View full report in Codecov by Sentry.
LGTM, but up to @dragomirp here!
]
},
application_name=PGB,
num_units=1,
As discussed in private, IMHO we must test 2-3 units here, since in production we run 2+ PGB units for HA.
I believe each client app has to deploy an independent mysql-router (for VMs for sure, but I would keep this requirement for K8s as well). Example: pgb should normally be deployed in app mode, so wordpress in model A will have pgb in the same model, while data-integrator will be in the DB model OR a separate one (but outside of model A => a different PGB in the real deployment).
Sure, in theory the same PGB should work for 2+ apps, but create a separate bug report if it is broken here. We should also re-check the legacy interface behavior here (app1:psql + app2:postgresql_client => BOOM?).
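For reference, a hedged sketch (pytest-operator style) of the deploy call being suggested; `pgb_charm`, `resources`, and `PGB` are assumed to match the surrounding test fixtures, and only `num_units` differs from the snippet above:

```python
async def test_deploy_pgbouncer_ha(ops_test):
    """Sketch: deploy 2+ PGB units, mirroring the production HA setup."""
    await ops_test.model.deploy(
        pgb_charm,               # locally built charm under test (assumed fixture)
        resources=resources,     # OCI image resources, as in the existing test
        application_name=PGB,
        num_units=2,             # reviewer suggests 2-3 units instead of 1
    )
    await ops_test.model.wait_for_idle(apps=[PGB], status="active", timeout=1000)
```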
The node port is created if at least one unit requests it using `external-node-connectivity=True` via the `database` relation. Only applications that submit that flag as `True` receive the NodePort information; the others receive the internal ClusterIP.
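For illustration, a minimal sketch of how a requirer charm might set that flag, assuming it writes `external-node-connectivity` straight into its application databag on the `database` relation (real clients would typically go through the data_platform_libs requirer library; the charm and handler names here are hypothetical):

```python
import ops


class ClientAppCharm(ops.CharmBase):
    """Hypothetical requirer that asks for external (NodePort) connectivity."""

    def __init__(self, framework: ops.Framework):
        super().__init__(framework)
        framework.observe(self.on.database_relation_joined, self._on_database_joined)

    def _on_database_joined(self, event: ops.RelationJoinedEvent) -> None:
        # Only the leader may write to the application databag.
        if not self.unit.is_leader():
            return
        # Request NodePort (external) connectivity instead of the ClusterIP.
        event.relation.data[self.app]["external-node-connectivity"] = "true"
```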
The charm internally checks in `update_connection_info()` whether a given relation needs the NodePort, and updates the service accordingly with `patch_port(True)`.

Periodically, the charm reviews whether the node port is still necessary: it runs `patch_port` with the `external_connectivity` output. If all relations are either going away or do not have `external-node-connectivity=True`, the node port is revoked. This logic is called on all main events that change the state of the charm.
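A minimal sketch of that re-evaluation logic, not the charm's actual code: `update_connection_info`, `patch_port`, and the `database` relation come from the description above, while the `external_connectivity` helper body and the event wiring are assumptions for illustration:

```python
import ops


class PgBouncerK8sCharm(ops.CharmBase):
    """Sketch of the NodePort re-evaluation described above (not the real charm)."""

    def __init__(self, framework: ops.Framework):
        super().__init__(framework)
        # "Called on all main events changing the state of the charm":
        # wire a few representative events to the same reconcile path.
        for event in (
            self.on.config_changed,
            self.on.database_relation_changed,
            self.on.database_relation_broken,
        ):
            framework.observe(event, self._reconcile)

    def external_connectivity(self) -> bool:
        """True while at least one `database` relation still asks for a NodePort."""
        return any(
            (relation.data[relation.app].get("external-node-connectivity") or "").lower() == "true"
            for relation in self.model.relations.get("database", [])
            if relation.app is not None
        )

    def update_connection_info(self) -> None:
        """Re-evaluate whether the service still needs a NodePort."""
        # Keep the NodePort while someone requests it; otherwise revoke it
        # and fall back to the internal ClusterIP.
        self.patch_port(self.external_connectivity())

    def _reconcile(self, _event: ops.EventBase) -> None:
        self.update_connection_info()

    def patch_port(self, external: bool) -> None:
        """Stub: per the PR description the real charm patches the K8s Service
        here, exposing a NodePort when `external` is True and reverting to
        ClusterIP otherwise."""
```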