This document outlines the design of an Ingress Controller for j8a inside a Kubernetes cluster. The Ingress Controller is responsible for managing incoming network traffic to `service` objects within the cluster, providing a highly available, j8a-based entry point for external clients to access the cluster's applications. The controller utilizes the `ingress` resource, along with other Kubernetes objects such as `service`, `configMap`, and `secret`, to facilitate routing and load balancing of network traffic.
`ingress-j8a` is a kubernetes ingress controller pod that exposes ports 80 and 443 of managed j8a pods, making the cluster accessible to the internet. It generates the configuration objects for j8a, keeps those configurations updated, and manages instances of j8a within the cluster.
`ingress-j8a` talks to the kube apiserver via the golang kubernetes client and authenticates inside the cluster with a `j8a-serviceaccount` that is deployed together with the ingress controller. The `j8a-serviceaccount` has an associated `j8a-clusterrole` and `j8a-clusterrolebinding` to give it the minimum privileges required to access the cluster-wide `ingress`, `ingressclass`, `service`, `configMap` and `secret` resources it needs.

`ingress-j8a` consumes cluster users' `ingress` resources from all namespaces for the `ingressClass` `j8a`.

`ingress-j8a` creates the `ingressClass` resource that specifies the controller implementation itself.
- J8a metadata (🚧 timeouts?) is controlled by modifying this resource and specifying `spec.parameters` keys that reconfigure j8a.
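As a sketch, the `ingressClass` resource could look as follows; the controller string `j8a.io/ingress-j8a` and the parameters object (`apiGroup`, `kind`, `name`) are illustrative assumptions, not final names:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: j8a
spec:
  # identifies ingress-j8a as the controller implementation (name assumed)
  controller: j8a.io/ingress-j8a
  # hypothetical reference to a parameters object that reconfigures j8a
  parameters:
    apiGroup: j8a.io
    kind: J8aConfig
    name: j8a-config
```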
`ingress-j8a` creates a `deployment` of j8a into the cluster by talking to the kubernetes API server.
- Once `ingress-j8a` is undeployed, the dependent deployment of j8a pods remains. Upon re-deploy the controller recognizes the existing deployment.
- Pods use off-the-shelf j8a images from dockerhub.
- Proxy config is passed via env internally.
- When the proxy config needs to change, the deployment is updated with the new contents of the env variable.
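A minimal sketch of the generated deployment, assuming the mechanics above; the image name, labels, env variable name `J8A_CONFIG` and its inline yml are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: j8a
  namespace: j8a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: j8a
  template:
    metadata:
      labels:
        app: j8a
    spec:
      containers:
        - name: j8a
          # off-the-shelf j8a image from dockerhub (tag assumed)
          image: simonmittag/j8a:latest
          ports:
            - containerPort: 80
            - containerPort: 443
          env:
            # proxy config passed via env; updating this value rolls the pods
            - name: J8A_CONFIG
              value: |
                connection:
                  downstream:
                    http:
                      port: 80
```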
`ingress-j8a` allocates a `service` of type `LoadBalancer` that forwards traffic to the proxy server pods.
- The j8a `pod` itself exposes ports 80 and 443 on its clusterIP (depending on config from ingress.yml). It is accessed externally via the outer load balancer.
- j8a routes traffic to pods that are mapped by translating `service` urls to actual pods inside the cluster.
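The allocated `service` might be sketched like this, assuming the pod labels from the j8a deployment; the `app: j8a` selector and port names are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: j8a
  namespace: j8a
spec:
  type: LoadBalancer
  # selects the j8a proxy server pods behind the outer load balancer
  selector:
    app: j8a
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```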
- Zero downtime deployments for j8a during updates to all cluster resources.
- Redundancy for j8a with multiple proxy server instances and a load balancing mechanism.
- Intelligent defaults for j8a for proxy server params the kubernetes ingress resource does not readily expose.
The basic mechanics of monitoring kubernetes for configuration changes, then updating J8a's config and its live traffic routes.
- The user deploys `ingress` resources to the cluster, or updates them. The same applies to dependent resources such as `configMap` and `secret` that are used by the `ingress` resources. The user may deploy these at any time.
- A cache that runs inside `ingress-j8a` monitors for updates to kube resources in all namespaces. It pulls down the latest resources, caches them, then versions its own config. This mechanism has an idle-wait safeguard to protect against versioning too frequently.
- The control loop inside `ingress-j8a` that continuously waits for config changes is notified (this idea is borrowed from ingress-nginx).
- The control loop reads the versioned, cached config out and generates a j8a config object in yml format. This is based on a template of the j8a config, filled in using go {{template}} variables. The result is deployed to the kube cluster as its own configMap object in the j8a namespace.
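The template step above could be sketched as follows; the field names loosely follow j8a's yml config shape, and the go {{template}} variable names (`.Routes`, `.Resources`, etc.) are assumptions for illustration:

```yaml
# j8a config template, filled in by the control loop from cached resources
connection:
  downstream:
    http:
      port: {{ .HttpPort }}
routes:
{{- range .Routes }}
  - path: {{ .Path }}
    resource: {{ .Resource }}
{{- end }}
resources:
{{- range .Resources }}
  {{ .Name }}:
    - url:
        scheme: http
        host: {{ .Host }}
        port: {{ .Port }}
{{- end }}
```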
`ingress-j8a` then deploys the `configMap` as a resource to the kube api server and keeps it updated for subsequent changes.
- kube api server deploys this resource into the cluster and maintains it there as the source of truth for the current config, outside the cache of `ingress-j8a`.
- `ingress-j8a` then tells kube api server to deploy the latest docker image of j8a into the cluster using this config. It updates the current deployment for j8a and deploys new pods into that deployment using a rolling configuration update.
- kube apiserver updates the `deployment` using the passed-in config via the descriptor. Pods are updated by creation of a new `replicaset` (not pictured) that scales up while the old one scales down.
- kube apiserver updates the `service` with a `labelSelector` to tell the loadbalancer about the new proxy pods with their updated config.
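The rolling replicaset swap described above corresponds to the deployment's update strategy; a sketch with illustrative surge and unavailability values:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # bring up one new j8a pod before taking an old one down
      maxSurge: 1
      maxUnavailable: 0
```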
The ingress-j8a team welcomes all contributors. Everyone interacting with the project's codebase, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.