---
title: "Update: Using BGP to integrate Cilium with OPNsense"
date: "2024-01-14"
description: "An update to integrating Cilium and OPNsense using BGP"
categories:
- "kubernetes"
- "networking"
- "bgp"
tags:
- "bgp"
- "containers"
- "cilium"
- "cni"
- "homelab"
- "kubernetes"
- "networking"
---

A little while back, I wrote a [short piece on integrating Cilium with OPNsense using BGP](/posts/using-bgp-to-integrate-cilium-with-opnsense/). With more recent releases of Cilium, the team have introduced the [Cilium BGP Control Plane](https://docs.cilium.io/en/stable/network/bgp-control-plane/) (currently a beta feature). This reworking of the BGP integration replaces the old MetalLB-based control plane, so the older feature must be disabled first. To enable the new feature, you can either pass an argument to Cilium:

```shell
--enable-bgp-control-plane=true
```

Or if you use Helm to install Cilium then the following values are required:

```yaml
bgpControlPlane:
  enabled: true
```
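
If Cilium is managed with Helm then the change can be rolled out with a normal `helm upgrade`; the sketch below assumes the usual release name (`cilium`) and namespace (`kube-system`), so adjust both to match your installation:

```shell
# Assumes Cilium was installed from the official chart as the "cilium"
# release in the kube-system namespace; adjust both to suit your cluster.
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set bgpControlPlane.enabled=true
```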

Note that the OPNsense configuration described in the [previous post](/posts/using-bgp-to-integrate-cilium-with-opnsense/) is unchanged; it is only the configuration of Cilium which differs.

## BGP configuration

Configuration of the BGP feature is somewhat different to the single `bgp-config` ConfigMap used by the old MetalLB integration. Below is an example of the old configuration which we'll translate into the new Custom Resources provided by Cilium:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: cilium
data:
  config.yaml: |
    peers:
    - peer-address: 172.25.0.1
      peer-asn: 65401
      my-asn: 65502
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 172.27.0.32-172.27.0.62
```

### CiliumLoadBalancerIPPool

The address pool is now configured by way of the new `CiliumLoadBalancerIPPool` Custom Resource. This CR separates out the address configuration from the BGP router configuration:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: service-pool
spec:
  cidrs:
  - cidr: 172.27.0.32/27
```
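
Once the pool has been applied it's worth a quick sanity check that Cilium has accepted it; assuming `kubectl` access to the cluster, the pools can simply be listed:

```shell
# List the configured pools to confirm the CIDR was accepted by Cilium.
kubectl get ciliumloadbalancerippools.cilium.io
```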

### CiliumBGPPeeringPolicy

The `CiliumBGPPeeringPolicy` CR is a little more complex; however, most of the fields in the `neighbors` section simply restate the defaults and can safely be ignored:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: bgp-peering-policy-worker
spec:
  nodeSelector:
    matchLabels:
      metal.sidero.dev/serverclass: worker
  virtualRouters:
  - localASN: 65502
    serviceSelector:
      matchExpressions:
      - {key: "io.cilium/bgp-announce", operator: In, values: ['worker']}
    neighbors:
    - peerAddress: '172.25.0.1/32'
      peerASN: 65401
      eBGPMultihopTTL: 10
      connectRetryTimeSeconds: 120
      holdTimeSeconds: 90
      keepAliveTimeSeconds: 30
      gracefulRestart:
        enabled: true
        restartTimeSeconds: 120
```

Some sections of the peering policy require a little more explanation.

Firstly, I only have a single neighbor defined, as my OPNsense firewall is the only peer in my network. This is easily translated over from the old configuration, along with the local ASN for the cluster:

```yaml
virtualRouters:
- localASN: 65502
  neighbors:
  - peerAddress: '172.25.0.1/32'
    peerASN: 65401
```

The new peering policy resource allows far more flexibility for managing BGP topologies. In my case, I only want worker nodes to announce routes, so I've configured the peering policy to apply only to them. This is achieved by applying a label to those nodes, which the policy's `nodeSelector` then matches:

```yaml
nodeSelector:
  matchLabels:
    metal.sidero.dev/serverclass: worker
```
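
Any label will do here; if the nodes don't already carry a suitable one then it can be applied by hand (the node name below is illustrative):

```shell
# Label a worker node so that the policy's nodeSelector matches it
# ("worker-01" is an illustrative node name).
kubectl label node worker-01 metal.sidero.dev/serverclass=worker
```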

The final config option to note is the `serviceSelector`; I do not want every Service IP to be announced, so I ensure that Cilium only announces the Services I choose. To achieve this, I label any Service which should be announced with `io.cilium/bgp-announce=worker`:

```yaml
serviceSelector:
  matchExpressions:
  - {key: "io.cilium/bgp-announce", operator: In, values: ['worker']}
```
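
Labelling a Service is a one-liner; the example below assumes an ingress controller Service named `ingress-nginx-controller` in the `ingress-nginx` namespace (both names are purely illustrative):

```shell
# Label an existing Service so that the serviceSelector above matches it
# (the Service name and namespace are illustrative).
kubectl label service ingress-nginx-controller -n ingress-nginx io.cilium/bgp-announce=worker
```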

I chose this scheme because I also have a peering policy which only applies to control plane nodes.

### BGP topologies

Having only one upstream peer, my setup is quite simple. It is possible to create more complex topologies; for these I suggest [referring to the documentation](https://docs.cilium.io/en/stable/network/bgp-control-plane/#creating-a-bgp-topology).
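
Whichever topology you end up with, recent versions of the cilium CLI can report the state of the resulting sessions; this is a quick sketch assuming the CLI is installed and new enough to include the `bgp` subcommand:

```shell
# Show the BGP peering status reported by each node's Cilium agent.
cilium bgp peers
```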