Commit 1d4a2b4

v0.1.2: add Kubernetes resource requests/limits; refactor into HairpinProxyController class for readability
compumike committed Nov 8, 2020
1 parent 29b04f6 commit 1d4a2b4
Showing 3 changed files with 66 additions and 41 deletions.
12 changes: 6 additions & 6 deletions README.md
@@ -7,7 +7,7 @@ If you've had problems with ingress-nginx, cert-manager, LetsEncrypt ACME HTTP01
## One-line install

```shell
-kubectl apply -f https://raw.githubusercontent.com/compumike/hairpin-proxy/v0.1.1/deploy.yml
+kubectl apply -f https://raw.githubusercontent.com/compumike/hairpin-proxy/v0.1.2/deploy.yml
```

If you're using [ingress-nginx](https://kubernetes.github.io/ingress-nginx/) and [cert-manager](https://github.com/jetstack/cert-manager), it will work out of the box. See detailed installation and testing instructions below.
@@ -46,10 +46,10 @@ None of these are particularly easy without modifying upstream packages, and the

## The hairpin-proxy Solution

-1. hairpin-proxy intercepts and modifies cluster-internal DNS lookups for hostnames that are served by your ingress controller, pointing them to the IP of an internal `hairpin-proxy-haproxy` service instead. (This is managed by `hairpin-proxy-controller`, which simply watches the Kubernetes API for new/modified Ingress resources, examines their `spec.tls.hosts`, and updates the CoreDNS ConfigMap when necessary.)
+1. hairpin-proxy intercepts and modifies cluster-internal DNS lookups for hostnames that are served by your ingress controller, pointing them to the IP of an internal `hairpin-proxy-haproxy` service instead. (This DNS redirection is managed by `hairpin-proxy-controller`, which simply polls the Kubernetes API for new/modified Ingress resources, examines their `spec.tls.hosts`, and updates the CoreDNS ConfigMap when necessary.)
2. The internal `hairpin-proxy-haproxy` service runs a minimal HAProxy instance which is configured to append the PROXY line and forward the traffic on to the internal ingress controller.

-As a result, when pod in your cluster (such as cert-manager) try to access http://your-site/, they resolve to the hairpin-proxy, which adds the PROXY line and sends it to your `ingress-nginx`. The NGINX parses the PROXY protocol just as it would if it had come from an external load balancer, so it sees a valid request and handles it identically to external requests.
+As a result, when pods in your cluster (such as cert-manager) try to access http://your-site/, they resolve to the hairpin-proxy, which adds the PROXY line and sends it to your `ingress-nginx`. The NGINX parses the PROXY protocol just as it would if it had come from an external load balancer, so it sees a valid request and handles it identically to external requests.
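
For example, for an Ingress whose `spec.tls.hosts` includes `subdomain.example.com`, the controller inserts a rewrite rule like the following at the top of the main `.:53` server block of the CoreDNS Corefile (a sketch based on the controller code in this commit; the rest of the block is whatever your cluster's Corefile already contains):

```
.:53 {
    rewrite name subdomain.example.com hairpin-proxy.hairpin-proxy.svc.cluster.local # Added by hairpin-proxy
    # ... your cluster's existing CoreDNS configuration ...
}
```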

## Installation and Testing

@@ -60,7 +60,7 @@ Let's suppose that `http://subdomain.example.com/` is served from your cluster,
Get a shell within your cluster and try to access the site to confirm that it isn't working:

```shell
-k run my-test-container --image=alpine -it --rm -- /bin/sh
+kubectl run my-test-container --image=alpine -it --rm -- /bin/sh
apk add bind-tools curl
dig subdomain.example.com
curl http://subdomain.example.com/
@@ -72,12 +72,12 @@ The `dig` should show the external load balancer IP address. The first `curl` sh
### Step 1: Install hairpin-proxy in your Kubernetes cluster

```shell
-kubectl apply -f https://raw.githubusercontent.com/compumike/hairpin-proxy/v0.1.0/deploy.yml
+kubectl apply -f https://raw.githubusercontent.com/compumike/hairpin-proxy/v0.1.2/deploy.yml
```

If you're using `ingress-nginx`, this will work as-is.

-If you using an ingress controller other than `ingress-nginx`, you must change the `TARGET_SERVER` environment variable passed to the `hairpin-proxy-haproxy` container. It defaults to `ingress-nginx-controller.ingress-nginx.svc.cluster.local`, which specifies the `ingress-nginx-controller` Service within the `ingress-nginx` namespace. You can change this by editing the `hairpin-proxy-haproxy` Deployment and specifiying an environment variable:
+However, if you're using an ingress controller other than `ingress-nginx`, you must change the `TARGET_SERVER` environment variable passed to the `hairpin-proxy-haproxy` container. It defaults to `ingress-nginx-controller.ingress-nginx.svc.cluster.local`, which specifies the `ingress-nginx-controller` Service within the `ingress-nginx` namespace. You can change this by editing the `hairpin-proxy-haproxy` Deployment and specifying an environment variable:

```shell
kubectl edit -n hairpin-proxy deployment hairpin-proxy-haproxy
```
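
For illustration, a minimal sketch of the relevant Deployment fragment with the override in place. The Traefik Service address below is only an assumed example, not a project default; substitute the cluster-internal Service DNS name of your own ingress controller:

```yaml
# Fragment of the hairpin-proxy-haproxy Deployment (namespace: hairpin-proxy).
# Only the env entry is the change; "traefik.kube-system.svc.cluster.local" is hypothetical.
spec:
  template:
    spec:
      containers:
        - name: main
          image: compumike/hairpin-proxy-haproxy:0.1.2
          env:
            - name: TARGET_SERVER
              value: "traefik.kube-system.svc.cluster.local"
```
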
18 changes: 16 additions & 2 deletions deploy.yml
@@ -23,8 +23,15 @@ spec:
        app: hairpin-proxy-haproxy
    spec:
      containers:
-      - image: compumike/hairpin-proxy-haproxy:0.1.1
+      - image: compumike/hairpin-proxy-haproxy:0.1.2
        name: main
+        resources:
+          requests:
+            memory: "100Mi"
+            cpu: "10m"
+          limits:
+            memory: "200Mi"
+            cpu: "50m"

---

@@ -144,5 +151,12 @@ spec:
        runAsUser: 405
        runAsGroup: 65533
      containers:
-      - image: compumike/hairpin-proxy-controller:0.1.1
+      - image: compumike/hairpin-proxy-controller:0.1.2
        name: main
+        resources:
+          requests:
+            memory: "50Mi"
+            cpu: "10m"
+          limits:
+            memory: "100Mi"
+            cpu: "50m"
77 changes: 44 additions & 33 deletions hairpin-proxy-controller/src/main.rb
@@ -1,54 +1,65 @@
#!/usr/bin/env ruby
# frozen_string_literal: true

-STDOUT.sync = true
-
require "k8s-client"
+require "logger"

-def ingress_hosts(k8s)
-  all_ingresses = k8s.api("extensions/v1beta1").resource("ingresses").list
-
-  all_tls_blocks = all_ingresses.map { |r| r.spec.tls }.flatten.compact
-
-  all_tls_blocks.map(&:hosts).flatten.compact.sort.uniq
-end
-
-def rewrite_coredns_corefile(cf, hosts)
-  cflines = cf.strip.split("\n").reject { |line| line.strip.end_with?("# Added by hairpin-proxy") }
-
-  main_server_line = cflines.index { |line| line.strip.start_with?(".:53 {") }
-  raise "Can't find main server line! '.:53 {' in Corefile" if main_server_line.nil?
-
-  rewrite_lines = hosts.map { |host| "    rewrite name #{host} hairpin-proxy.hairpin-proxy.svc.cluster.local # Added by hairpin-proxy" }
-
-  cflines.insert(main_server_line + 1, *rewrite_lines)
-
-  cflines.join("\n")
-end
-
-def main
-  client = K8s::Client.in_cluster_config
-
-  loop do
-    puts "#{Time.now}: Fetching..."
-
-    hosts = ingress_hosts(client)
-    cm = client.api.resource("configmaps", namespace: "kube-system").get("coredns")
-
-    old_corefile = cm.data.Corefile
-    new_corefile = rewrite_coredns_corefile(old_corefile, hosts)
-
-    if old_corefile.strip != new_corefile.strip
-      puts "#{Time.now}: Corefile changed!"
-      puts new_corefile
-
-      puts "#{Time.now}: Updating ConfigMap."
-      cm.data.Corefile = new_corefile
-      client.api.resource("configmaps", namespace: "kube-system").update_resource(cm)
-    end
-
-    sleep(15)
-  end
-end
-
-main
+class HairpinProxyController
+  COREDNS_CONFIGMAP_LINE_SUFFIX = "# Added by hairpin-proxy"
+  DNS_REWRITE_DESTINATION = "hairpin-proxy.hairpin-proxy.svc.cluster.local"
+  POLL_INTERVAL = ENV.fetch("POLL_INTERVAL", "15").to_i.clamp(1..)
+
+  def initialize
+    @k8s = K8s::Client.in_cluster_config
+
+    STDOUT.sync = true
+    @log = Logger.new(STDOUT)
+  end
+
+  def fetch_ingress_hosts
+    # Return a sorted Array of all unique hostnames mentioned in Ingress spec.tls.hosts blocks, in all namespaces.
+    all_ingresses = @k8s.api("extensions/v1beta1").resource("ingresses").list
+    all_tls_blocks = all_ingresses.map { |r| r.spec.tls }.flatten.compact
+    all_tls_blocks.map(&:hosts).flatten.compact.sort.uniq
+  end
+
+  def coredns_corefile_with_rewrite_rules(original_corefile, hosts)
+    # Return a String representing the original CoreDNS Corefile, modified to include rewrite rules for each of *hosts.
+    # This is an idempotent transformation because our rewrites are labeled with COREDNS_CONFIGMAP_LINE_SUFFIX.
+
+    # Extract base configuration, without our hairpin-proxy rewrites
+    cflines = original_corefile.strip.split("\n").reject { |line| line.strip.end_with?(COREDNS_CONFIGMAP_LINE_SUFFIX) }
+
+    # Create rewrite rules
+    rewrite_lines = hosts.map { |host| "    rewrite name #{host} #{DNS_REWRITE_DESTINATION} #{COREDNS_CONFIGMAP_LINE_SUFFIX}" }
+
+    # Inject at the start of the main ".:53 { ... }" configuration block
+    main_server_line = cflines.index { |line| line.strip.start_with?(".:53 {") }
+    raise "Can't find main server line! '.:53 {' in Corefile" if main_server_line.nil?
+    cflines.insert(main_server_line + 1, *rewrite_lines)
+
+    cflines.join("\n")
+  end
+
+  def main_loop
+    @log.info("Starting main_loop with #{POLL_INTERVAL}s polling interval.")
+    loop do
+      @log.info("Polling all Ingress resources and CoreDNS configuration...")
+      hosts = fetch_ingress_hosts
+      cm = @k8s.api.resource("configmaps", namespace: "kube-system").get("coredns")
+
+      old_corefile = cm.data.Corefile
+      new_corefile = coredns_corefile_with_rewrite_rules(old_corefile, hosts)
+
+      if old_corefile.strip != new_corefile.strip
+        @log.info("Corefile has changed! New contents:\n#{new_corefile}\nSending updated ConfigMap to Kubernetes API server...")
+        cm.data.Corefile = new_corefile
+        @k8s.api.resource("configmaps", namespace: "kube-system").update_resource(cm)
+      end
+
+      sleep(POLL_INTERVAL)
+    end
+  end
+end
+
+HairpinProxyController.new.main_loop if $PROGRAM_NAME == __FILE__
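
The refactored controller also reads its polling interval from the `POLL_INTERVAL` environment variable (default 15 seconds, clamped to at least 1). One way to override it, sketched here as an env entry on the controller Deployment (the 60-second value is just an example):

```yaml
# Fragment of the hairpin-proxy-controller Deployment; only the env entry is new here.
spec:
  template:
    spec:
      containers:
        - name: main
          image: compumike/hairpin-proxy-controller:0.1.2
          env:
            - name: POLL_INTERVAL
              value: "60"
```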
