Kubernetes operator that acts as a control plane to manage the complete deployment lifecycle of Apache Flink applications. This is an open-source fork of GoogleCloudPlatform/flink-on-k8s-operator with several new features and bug fixes.
Beta
The operator is under active development; backward compatibility of the APIs is not guaranteed for beta releases.
- Version >= 1.16 of Kubernetes
- Version >= 1.7 of Apache Flink
- Version >= 1.5.3 of cert-manager
The Kubernetes Operator for Apache Flink extends the vocabulary of the Kubernetes language (e.g., Pod, Service) with the custom resource definition FlinkCluster and runs a controller Pod that keeps watching the custom resources. Once a FlinkCluster custom resource is created and detected by the controller, the controller creates the underlying Kubernetes resources (e.g., the JobManager Pod) based on the spec of the custom resource. With the operator installed in a cluster, users can then talk to the cluster through the Kubernetes API and Flink custom resources to manage their Flink clusters and jobs.
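As a rough illustration, a minimal session cluster could be declared with a FlinkCluster resource like the sketch below. The API group/version and field names follow the upstream flink-on-k8s-operator samples and may differ between releases of this fork, so check the installed CRD before relying on them.

```yaml
# Hedged sketch of a minimal session cluster (no job spec), based on upstream samples.
apiVersion: flinkoperator.k8s.io/v1beta1   # assumption: group/version as in the upstream project
kind: FlinkCluster
metadata:
  name: flinksessioncluster-sample
spec:
  image:
    name: flink:1.14.2                     # any custom Flink image can be used here
  jobManager:
    resources:
      limits:
        cpu: "200m"
        memory: "1024Mi"
  taskManager:
    replicas: 2
    resources:
      limits:
        cpu: "200m"
        memory: "1024Mi"
```

Applying such a manifest creates the custom resource; the controller then reconciles it into the JobManager and TaskManager Pods and the JobManager Service.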
- Support for both Flink job cluster and session cluster depending on whether a job spec is provided (see the job cluster sketch after this list)
- Custom Flink images
- Flink and Hadoop configs and container environment variables
- Init containers and sidecar containers
- Remote job jar
- Configurable namespace to run the operator in
- Configurable namespace to watch custom resources in
- Configurable access scope for JobManager service
- Taking savepoints periodically
- Taking savepoints on demand
- Restarting failed job from the latest savepoint automatically
- Cancelling job with savepoint
- Cleanup policy on job success and failure
- Updating cluster or job
- Batch scheduling for JobManager and TaskManager Pods
- GCP integration (service account, GCS connector, networking)
- Support for Beam Python jobs
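Several of these features are configured directly in the FlinkCluster job spec. The sketch below shows a hypothetical job cluster with periodic savepoints, automatic restart from the latest savepoint, and a cleanup policy; the field names follow the upstream flink-on-k8s-operator API and should be verified against the CRD of the release you install, and the jar path and savepoint bucket are placeholders.

```yaml
# Hedged sketch of a job cluster with savepoint, restart, and cleanup settings.
apiVersion: flinkoperator.k8s.io/v1beta1   # assumption: group/version as in the upstream project
kind: FlinkCluster
metadata:
  name: wordcount-job-cluster
spec:
  image:
    name: flink:1.14.2
  jobManager:
    resources:
      limits:
        cpu: "500m"
        memory: "1Gi"
  taskManager:
    replicas: 2
  job:
    jarFile: ./examples/streaming/WordCount.jar   # jar baked into the image; a remote job jar URL also works
    className: org.apache.flink.streaming.examples.wordcount.WordCount
    parallelism: 2
    savepointsDir: gs://my-bucket/flink-savepoints/   # hypothetical bucket
    autoSavepointSeconds: 300                         # take a savepoint every 5 minutes
    restartPolicy: FromSavepointOnFailure             # restart a failed job from the latest savepoint
    cleanupPolicy:
      afterJobSucceeds: DeleteCluster
      afterJobFails: KeepCluster
```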
The operator is still under active development, and there is no Helm chart available yet. You can follow either
- the User Guide to deploy a released operator image on ghcr.io/spotify/flink-operator to your Kubernetes cluster, or
- the Developer Guide to build an operator image first and then deploy it to the cluster.
- Manage savepoints
- Use remote job jars
- Run Apache Beam Python jobs
- Use GCS connector
- Test with Apache Kafka
- Create Flink job clusters with Helm Chart
- Run Python job using pyflink API
Please check out CONTRIBUTING.md and the Developer Guide.