Initial commit

compumike committed Oct 23, 2020 · 0 parents · commit 1d13489
Showing 15 changed files with 405 additions and 0 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -0,0 +1 @@
*~
21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
The MIT License

Copyright (c) 2020 Michael F. Robbins.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
51 changes: 51 additions & 0 deletions README.md
@@ -0,0 +1,51 @@
# hairpin-proxy

PROXY protocol support for internal-to-LoadBalancer traffic for Kubernetes Ingress users.

If you've had problems with ingress-nginx, cert-manager, LetsEncrypt ACME HTTP01 self-check failures, and the PROXY protocol, read on.

## The PROXY Protocol

If you run a service behind a load balancer, your downstream server will see all connections as originating from the load balancer's IP address. The user's source IP address will be lost and will not be visible to your server. To solve this, the [PROXY protocol](http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) preserves source addresses on proxied TCP connections by having the load balancer prepend a simple string such as "PROXY TCP4 255.255.255.255 255.255.255.255 65535 65535\r\n" at the beginning of the downstream TCP connection.
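For illustration, here is a minimal Ruby sketch of what a PROXY-protocol-speaking load balancer does when opening the downstream connection (the hostname, addresses, and ports are hypothetical):

```ruby
require "socket"

# Open the downstream connection, prepend a PROXY v1 line carrying the
# real client's address, then relay the application data unchanged.
# Hostname, IPs, and ports are hypothetical.
backend = TCPSocket.new("backend.example.com", 80)
backend.write("PROXY TCP4 203.0.113.7 10.0.0.5 51234 80\r\n") # client addr/port, then the address the client connected to
backend.write("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
puts backend.read # the backend's response, which can now attribute the request to 203.0.113.7
backend.close
```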

Because this injects data at the application level, the PROXY protocol must be supported on both ends of the connection. Fortunately, it is already widely supported:

- Load balancers such as [AWS ELB](https://aws.amazon.com/blogs/aws/elastic-load-balancing-adds-support-for-proxy-protocol/), [AWS NLB](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#proxy-protocol), [DigitalOcean Load Balancers](https://www.digitalocean.com/blog/load-balancers-now-support-proxy-protocol/), [GCP Cloud Load Balancing](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#proxy-protocol), and [Linode NodeBalancers](https://www.linode.com/docs/guides/nodebalancer-proxypass-configuration/) support adding the PROXY protocol line to their downstream TCP connections.
- Web servers such as [Apache](https://httpd.apache.org/docs/2.4/mod/mod_remoteip.html#remoteipproxyprotocol), [Caddy](https://github.com/caddyserver/caddy/pull/1349), [Lighttpd](https://redmine.lighttpd.net/projects/lighttpd/wiki/Docs_ModExtForward), and [NGINX](https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/) support receiving the PROXY protocol line, using the passed source IP for access logging and passing it to the application server in an `X-Forwarded-For` HTTP header, where it can be accessed by your backend.

If you configure both your load balancer and web server to send/accept the PROXY protocol, everything just works! Until...

## The Problem

Kubernetes networking is too smart for its own good here ([see the upstream Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/66607)).

An ingress controller is exposed by a Service of type LoadBalancer, which your cloud provider provisions. Kubernetes notices the LoadBalancer's external IP address. As an "optimization", kube-proxy on each node writes iptables rules that redirect all outbound traffic destined for the LoadBalancer's external IP address to the cluster-internal ClusterIP of the Service instead. If your cloud load balancer doesn't modify the traffic, this is indeed a helpful optimization.

However, when you have the PROXY protocol enabled, the external load balancer _does_ modify the traffic, prepending the PROXY line before each TCP connection. If you connect directly to the web server internally, bypassing the external load balancer, it will receive traffic _without_ the PROXY line. In the case of ingress-nginx with `use-proxy-protocol: "true"`, NGINX fails when it receives a bare GET request. As a result, accessing http://your-site/ from inside the cluster fails!
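(For reference, this is the ingress-nginx setting in question: a `use-proxy-protocol` key in its ConfigMap. The ConfigMap's name and namespace depend on how you installed ingress-nginx; the values below are typical but hypothetical.)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace vary by installation
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
```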

This is particularly a problem when using cert-manager to provision SSL certificates. Cert-manager uses HTTP01 validation, and before asking LetsEncrypt to hit http://your-site/some-special-url, it tries to access this URL itself as a self-check. That self-check fails for the reason above, and cert-manager does not allow you to skip it. As a result, your certificate is never provisioned, even though the verification URL would be perfectly accessible externally. See upstream cert-manager issues: [proxy_protocol mode breaks HTTP01 challenge Check stage](https://github.com/jetstack/cert-manager/issues/466), [http-01 self check failed for domain](https://github.com/jetstack/cert-manager/issues/656), and [Self check always fail](https://github.com/jetstack/cert-manager/issues/863).

## Possible Solutions

There are several ways to solve this problem:

- Modify Kubernetes to not rewrite the external IP address of a LoadBalancer.
- Modify nginx to treat the PROXY line as optional.
- Modify cert-manager to add the PROXY line on its self-check.
- Modify cert-manager to bypass the self-check.

None of these are particularly easy without modifying upstream packages, and the upstream maintainers don't seem eager to address the reported issues linked above.

## The hairpin-proxy Solution

1. hairpin-proxy intercepts and modifies cluster-internal DNS lookups for hostnames that are served by your ingress controller, pointing them to the IP of an internal `hairpin-proxy-haproxy` service instead. (This is managed by `hairpin-proxy-controller`, which simply watches the Kubernetes API for new/modified Ingress resources and updates the CoreDNS ConfigMap when necessary; see the example below.)
2. The internal `hairpin-proxy-haproxy` service runs a minimal HAProxy instance which is configured to append the PROXY line and forward the traffic on to the internal ingress controller.
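For example, if your cluster has an Ingress for a hypothetical host `example.com`, the controller rewrites the `coredns` ConfigMap's Corefile to include a line like this:

```
.:53 {
    rewrite name example.com hairpin-proxy.hairpin-proxy.svc.cluster.local # Added by hairpin-proxy
    # ... the rest of your existing Corefile ...
}
```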

As a result, when a pod in your cluster (such as cert-manager) tries to access http://your-site/, the hostname resolves to the hairpin-proxy, which adds the PROXY line and forwards the request to your `ingress-nginx`. NGINX parses the PROXY protocol line just as it would if the connection had come from an external load balancer, so it sees a valid request and handles it identically to external requests.

## Deployment

```shell
kubectl apply -f deploy.yml
```
Coming soon.
4 changes: 4 additions & 0 deletions build_and_push
@@ -0,0 +1,4 @@
#!/bin/sh
set -e
(cd ./hairpin-proxy-haproxy && ./build_and_push)
(cd ./hairpin-proxy-controller && ./build_and_push)
151 changes: 151 additions & 0 deletions deploy.yml
@@ -0,0 +1,151 @@
apiVersion: v1
kind: Namespace
metadata:
  name: hairpin-proxy

---

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hairpin-proxy-haproxy
  name: hairpin-proxy-haproxy
  namespace: hairpin-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hairpin-proxy-haproxy
  template:
    metadata:
      labels:
        app: hairpin-proxy-haproxy
    spec:
      containers:
        - image: compumike/hairpin-proxy-haproxy:latest
          imagePullPolicy: Always
          name: main

---

apiVersion: v1
kind: Service
metadata:
  name: hairpin-proxy
  namespace: hairpin-proxy
spec:
  selector:
    app: hairpin-proxy-haproxy
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443

---

kind: ServiceAccount
apiVersion: v1
metadata:
  name: hairpin-proxy-controller-sa
  namespace: hairpin-proxy

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: hairpin-proxy-controller-cr
rules:
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hairpin-proxy-controller-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hairpin-proxy-controller-cr
subjects:
  - kind: ServiceAccount
    name: hairpin-proxy-controller-sa
    namespace: hairpin-proxy

---

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hairpin-proxy-controller-r
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - coredns
    verbs:
      - get
      - watch
      - update

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hairpin-proxy-controller-rb
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hairpin-proxy-controller-r
subjects:
  - kind: ServiceAccount
    name: hairpin-proxy-controller-sa
    namespace: hairpin-proxy

---

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hairpin-proxy-controller
  name: hairpin-proxy-controller
  namespace: hairpin-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hairpin-proxy-controller
  template:
    metadata:
      labels:
        app: hairpin-proxy-controller
    spec:
      serviceAccountName: hairpin-proxy-controller-sa
      securityContext:
        runAsUser: 405
        runAsGroup: 65533
      containers:
        - image: compumike/hairpin-proxy-controller:latest
          imagePullPolicy: Always
          name: main
1 change: 1 addition & 0 deletions hairpin-proxy-controller/.dockerignore
@@ -0,0 +1 @@
**/*~
24 changes: 24 additions & 0 deletions hairpin-proxy-controller/Dockerfile
@@ -0,0 +1,24 @@
FROM ruby:2.7.2-alpine AS build

ENV GEM_HOME="/usr/local/bundle"
ENV PATH $GEM_HOME/bin:$GEM_HOME/gems/bin:$PATH
RUN echo "gem: --no-document" > ~/.gemrc

RUN apk add --no-cache \
build-base \
ruby-dev

COPY Gemfile Gemfile.lock /app/
WORKDIR /app/
RUN bundle install --jobs 16 --retry 5

# Comment out the following four lines for development convenience.
# Keep them in for a smaller container image.
FROM ruby:2.7.2-alpine AS run
COPY --from=build $GEM_HOME $GEM_HOME
COPY Gemfile Gemfile.lock /app/
WORKDIR /app/

COPY ./src/ /app/src/

CMD ["/app/src/main.rb"]
7 changes: 7 additions & 0 deletions hairpin-proxy-controller/Gemfile
@@ -0,0 +1,7 @@
# frozen_string_literal: true

source "https://rubygems.org"

git_source(:github) { |repo_name| "https://github.com/#{repo_name}" }

gem "k8s-client", "~> 0.10.4"
60 changes: 60 additions & 0 deletions hairpin-proxy-controller/Gemfile.lock
@@ -0,0 +1,60 @@
GEM
  remote: https://rubygems.org/
  specs:
    concurrent-ruby (1.1.7)
    dry-configurable (0.11.6)
      concurrent-ruby (~> 1.0)
      dry-core (~> 0.4, >= 0.4.7)
      dry-equalizer (~> 0.2)
    dry-container (0.7.2)
      concurrent-ruby (~> 1.0)
      dry-configurable (~> 0.1, >= 0.1.3)
    dry-core (0.4.9)
      concurrent-ruby (~> 1.0)
    dry-equalizer (0.3.0)
    dry-inflector (0.2.0)
    dry-logic (0.6.1)
      concurrent-ruby (~> 1.0)
      dry-core (~> 0.2)
      dry-equalizer (~> 0.2)
    dry-struct (0.5.1)
      dry-core (~> 0.4, >= 0.4.3)
      dry-equalizer (~> 0.2)
      dry-types (~> 0.13)
      ice_nine (~> 0.11)
    dry-types (0.13.4)
      concurrent-ruby (~> 1.0)
      dry-container (~> 0.3)
      dry-core (~> 0.4, >= 0.4.4)
      dry-equalizer (~> 0.2)
      dry-inflector (~> 0.1, >= 0.1.2)
      dry-logic (~> 0.4, >= 0.4.2)
    excon (0.78.0)
    hashdiff (1.0.1)
    ice_nine (0.11.2)
    jsonpath (0.9.9)
      multi_json
      to_regexp (~> 0.2.1)
    k8s-client (0.10.4)
      dry-struct (~> 0.5.0)
      dry-types (~> 0.13.0)
      excon (~> 0.66)
      hashdiff (~> 1.0.0)
      jsonpath (~> 0.9.5)
      recursive-open-struct (~> 1.1.0)
      yajl-ruby (~> 1.4.0)
      yaml-safe_load_stream (~> 0.1)
    multi_json (1.15.0)
    recursive-open-struct (1.1.3)
    to_regexp (0.2.1)
    yajl-ruby (1.4.1)
    yaml-safe_load_stream (0.1.1)

PLATFORMS
  ruby

DEPENDENCIES
  k8s-client

BUNDLED WITH
   2.1.4
4 changes: 4 additions & 0 deletions hairpin-proxy-controller/build_and_push
@@ -0,0 +1,4 @@
#!/bin/sh
set -e
docker build -t compumike/hairpin-proxy-controller .
docker push compumike/hairpin-proxy-controller
50 changes: 50 additions & 0 deletions hairpin-proxy-controller/src/main.rb
@@ -0,0 +1,50 @@
#!/usr/bin/env ruby
# frozen_string_literal: true

STDOUT.sync = true

require "k8s-client"

# Collect every TLS hostname served by an Ingress in the cluster.
def ingress_hosts(k8s)
  k8s.api("extensions/v1beta1").resource("ingresses").list.map { |r| r.spec.tls }.flatten.map(&:hosts).flatten.sort.uniq
end

# Return a new Corefile with one "rewrite name" line per ingress host,
# replacing any rewrite lines added by a previous pass.
def rewrite_coredns_corefile(cf, hosts)
  cflines = cf.strip.split("\n").reject { |line| line.strip.end_with?("# Added by hairpin-proxy") }

  main_server_line = cflines.index { |line| line.strip.start_with?(".:53 {") }
  raise "Can't find main server line! '.:53 {' in Corefile" if main_server_line.nil?

  rewrite_lines = hosts.map { |host| "    rewrite name #{host} hairpin-proxy.hairpin-proxy.svc.cluster.local # Added by hairpin-proxy" }

  cflines.insert(main_server_line + 1, *rewrite_lines)

  cflines.join("\n")
end

def main
  client = K8s::Client.in_cluster_config

  # Poll the API server and update the coredns ConfigMap whenever the
  # set of ingress hosts changes.
  loop do
    puts "#{Time.now}: Fetching..."

    hosts = ingress_hosts(client)
    cm = client.api.resource("configmaps", namespace: "kube-system").get("coredns")

    old_corefile = cm.data.Corefile
    new_corefile = rewrite_coredns_corefile(old_corefile, hosts)

    if old_corefile.strip != new_corefile.strip
      puts "#{Time.now}: Corefile changed!"
      puts new_corefile

      puts "#{Time.now}: Updating ConfigMap."
      cm.data.Corefile = new_corefile
      client.api.resource("configmaps", namespace: "kube-system").update_resource(cm)
    end

    sleep(15)
  end
end

main
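As a sanity check, the rewrite logic above can be exercised outside the cluster by pasting `rewrite_coredns_corefile` into an IRB session (the sample Corefile and hostname below are hypothetical):

```ruby
# Assumes rewrite_coredns_corefile from src/main.rb is defined in the session.
corefile = <<~COREFILE
  .:53 {
      errors
      forward . /etc/resolv.conf
  }
COREFILE

puts rewrite_coredns_corefile(corefile, ["example.com"])
# .:53 {
#     rewrite name example.com hairpin-proxy.hairpin-proxy.svc.cluster.local # Added by hairpin-proxy
#     errors
#     forward . /etc/resolv.conf
# }
```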
1 change: 1 addition & 0 deletions hairpin-proxy-haproxy/.dockerignore
@@ -0,0 +1 @@
**/*~
10 changes: 10 additions & 0 deletions hairpin-proxy-haproxy/Dockerfile
@@ -0,0 +1,10 @@
FROM haproxy:2.2-alpine

EXPOSE 80
EXPOSE 443

# By default, forward all HTTP requests to an ingress-nginx.
# If using a different ingress controller, you may override TARGET_SERVER at runtime.
ENV TARGET_SERVER=ingress-nginx-controller.ingress-nginx.svc.cluster.local

COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
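If you run a different ingress controller, the default can be overridden with an `env` entry in the `hairpin-proxy-haproxy` Deployment's container spec (the service name below is a hypothetical example):

```yaml
containers:
  - image: compumike/hairpin-proxy-haproxy:latest
    imagePullPolicy: Always
    name: main
    env:
      - name: TARGET_SERVER
        value: my-ingress-controller.my-namespace.svc.cluster.local
```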
(Diffs for the two remaining files, hairpin-proxy-haproxy/build_and_push and hairpin-proxy-haproxy/haproxy.cfg, did not load on this page.)
