feat(dataflow): Add OAuth support for Kafka SASL to the dataflow engine #5111
Conversation
This includes the values which need to be set, but without providing the client ID and secret, and so on. This has yet to be done.
This will replace the existing one, splitting the responsibilities between retrieving secrets and interpreting their contents. This will enable the upcoming SASL OAuth secret handler to reuse the underlying k8s secret provider. This implementation is also simpler than before and provides logging.
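To illustrate the split described above, here is a minimal Kotlin sketch, assuming the fabric8 Kubernetes client; the names (`K8sSecretProvider`, `SaslPasswordHandler`, `getSecretField`) are illustrative, not the PR's actual identifiers:

```kotlin
import io.fabric8.kubernetes.client.KubernetesClient
import java.util.Base64

// Retrieval only: fetches and decodes a single field from a named k8s secret.
class K8sSecretProvider(private val client: KubernetesClient) {
    fun getSecretField(namespace: String, name: String, field: String): String? =
        client.secrets().inNamespace(namespace).withName(name).get()
            ?.data?.get(field)
            ?.let { String(Base64.getDecoder().decode(it)) }
}

// Interpretation only: knows which field holds the SASL password. Other
// handlers (e.g. a SASL OAuth secret handler) can reuse the same underlying
// provider while interpreting different fields.
class SaslPasswordHandler(private val secrets: K8sSecretProvider) {
    fun password(namespace: String, secretName: String): String =
        secrets.getSecretField(namespace, secretName, "password")
            ?: error("SASL password not found in secret '$secretName'")
}
```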
This provides flexibility around testing while maintaining the convenience of a readily-available object.
This is for the same reason as the SASL password provider; it is a lazy instantiation.
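A sketch of that lazy-instantiation pattern, with a test hook for injecting a stub (names here are hypothetical):

```kotlin
import io.fabric8.kubernetes.client.KubernetesClientBuilder

object SecretProviders {
    @Volatile
    private var override: K8sSecretProvider? = null

    // The default provider is only constructed on first use.
    private val default: K8sSecretProvider by lazy {
        K8sSecretProvider(KubernetesClientBuilder().build())
    }

    fun instance(): K8sSecretProvider = override ?: default

    // Test hook: swap in a stub instead of a real k8s-backed provider.
    fun overrideForTesting(provider: K8sSecretProvider) {
        override = provider
    }
}
```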
I had some fun with this error and some other errors in the dataflow engine logs:
This one was caused by using … There were also issues about null keys being defined in the JAAS config, which turned out to be a formatting issue where the config string wasn't terminated with a semicolon; I also added quotes around values to adhere to the specification. Presumably the parser expected more keys, then complained when no key was found, rather than complaining about a missing terminator, as in some issues one might find on the topic.
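For concreteness, a sketch of a well-formed JAAS config string showing both fixes: quoted values and a terminating semicolon. The option names (`clientId`, `clientSecret`) are placeholders; the actual options depend on the login callback handler in use.

```kotlin
// Omitting the trailing semicolon triggers the misleading "null key" parse
// error described above, rather than a missing-terminator complaint.
fun oauthJaasConfig(clientId: String, clientSecret: String): String =
    "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required " +
        "clientId=\"$clientId\" " +
        "clientSecret=\"$clientSecret\";"
```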
This ensures that only the relevant config values can be used in any given piece of logic. It also allows for validation in the future when creating configs, e.g. to reject missing parameters.
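For example, a per-mechanism config type could validate itself on construction (field names here are illustrative):

```kotlin
// Only OAuth-related values are visible to OAuth logic, and bad configs are
// rejected eagerly rather than failing deep inside the Kafka client.
data class OauthConfig(
    val clientId: String,
    val clientSecret: String,
    val tokenEndpointUrl: String,
) {
    init {
        require(clientId.isNotBlank()) { "OAuth client ID must not be blank" }
        require(clientSecret.isNotBlank()) { "OAuth client secret must not be blank" }
        require(tokenEndpointUrl.isNotBlank()) { "OAuth token endpoint must not be blank" }
    }
}
```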
This new default is in line with the Kubernetes setup, as provided by Helm, which makes use of a secret rather than a directly-mounted file.
For testing, I've created a suite of Ansible YAML overrides to simplify matters and for reproducibility, rather than having to edit Helm releases and provide lots of command-line overrides. These have been applied with the following Ansible commands. The ecosystem setup was not re-run when using the cluster-external Kafka provider (for SASL/PLAIN and SASL/OAUTHBEARER), but was for in-cluster Strimzi Kafka (for PLAINTEXT and mTLS).

ansible-playbook playbooks/setup-ecosystem.yaml -e full_install=no -e install_kafka=yes -e @.extra/values-scv2-plaintext.yaml
ansible-playbook playbooks/setup-seldon.yaml -e seldon_install_servers=no -e @.extra/values-scv2-plaintext.yaml
Superseded by #5127
Why

Motivation

This PR provides the dataflow engine/Kotlin equivalent of the Go changes provided by Rafal. It adds support for the OAUTHBEARER SASL mechanism when connecting to Kafka clusters.

References
What

Outstanding tasks

- Use client.secret instead of client.oauthSecret.
- Change security.kafka.sasl.client.passwordPath from /tmp/sasl/kafka/client/password to password, as this is a k8s secret field, not a file path in this context.

Changes
- Support for the OAUTHBEARER SASL mechanism when connecting to Kafka.

Testing
Create a fresh kind cluster:

Protocol = PLAINTEXT
Protocol = SASL_SSL, mechanism = OAUTHBEARER
This now works, with the dataflow engine getting itself to a running state without crashing.
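Roughly, this test case corresponds to client properties along the lines of the sketch below; this is a hedged illustration, not the PR's literal configuration, and the JAAS string and callback handler depend on the deployment:

```kotlin
import org.apache.kafka.clients.CommonClientConfigs
import org.apache.kafka.common.config.SaslConfigs
import java.util.Properties

fun oauthBearerClientProps(jaasConfig: String): Properties = Properties().apply {
    put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL")
    put(SaslConfigs.SASL_MECHANISM, "OAUTHBEARER")
    put(SaslConfigs.SASL_JAAS_CONFIG, jaasConfig)
    // A login callback handler would also be configured here, depending on
    // the OAuth integration in use (SaslConfigs.SASL_LOGIN_CALLBACK_HANDLER_CLASS).
}
```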
Protocol = SASL_SSL, mechanism = SCRAM-SHA-512
I needed to delete the Strimzi setup and its PVCs as SCRAM is not supported in Strimzi with KRaft.
I then used the below values as Ansible overrides and rolled out Kafka and the Core v2 installation again:
.extra/values-scv2-sasl-scram.yaml
This outputs logs like: