Let's create another egress gateway.
- Create the `IPPool`s for the egress gateway `green`.

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: egress-green-1a
  spec:
    cidr: 192.168.0.84/31
    allowedUses: ["Workload"]
    awsSubnetID: $SUBNETPUBEGW1AID
    blockSize: 32
    nodeSelector: "!all()"
    disableBGPExport: true
  ---
  apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: egress-green-1b
  spec:
    cidr: 192.168.0.116/31
    allowedUses: ["Workload"]
    awsSubnetID: $SUBNETPUBEGW1BID
    blockSize: 32
    nodeSelector: "!all()"
    disableBGPExport: true
  EOF
  ```
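  Each pool above uses a `/31` CIDR, and `blockSize: 32` assigns one address per pod. As a quick sanity check (a sketch, not part of the workshop steps), you can compute how many addresses a `/31` provides:

  ```bash
  # A /31 prefix leaves 32 - 31 = 1 host bit, i.e. 2 addresses per pool --
  # enough for the egress gateway replica placed in each availability zone.
  prefix=31
  addresses=$((2 ** (32 - prefix)))
  echo "$addresses"   # prints 2
  ```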
  Verify the created `IPPool`s:

  ```bash
  kubectl get ippools -o=custom-columns='NAME:.metadata.name,CIDR:.spec.cidr'
  ```
- Create the egress gateway `green`. This egress gateway's annotations include the Elastic IP addresses created previously, so the egress gateway will use those Elastic IPs when sending traffic to the Internet.

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: egress-gateway-green
    namespace: default
    labels:
      egress-code: green
  spec:
    replicas: 2
    selector:
      matchLabels:
        egress-code: green
    template:
      metadata:
        annotations:
          cni.projectcalico.org/ipv4pools: '["egress-green-1a","egress-green-1b"]'
          cni.projectcalico.org/awsElasticIPs: '["$EIPADDRESS1", "$EIPADDRESS2"]'
        labels:
          egress-code: green
      spec:
        topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              egress-code: green
        imagePullSecrets:
        - name: tigera-pull-secret
        nodeSelector:
          kubernetes.io/os: linux
        containers:
        - name: egress-gateway
          image: quay.io/tigera/egress-gateway:v3.14.1
          env:
          - name: EGRESS_POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          securityContext:
            privileged: true
          volumeMounts:
          - mountPath: /var/run
            name: policysync
          resources:
            requests:
              projectcalico.org/aws-secondary-ipv4: 1
            limits:
              projectcalico.org/aws-secondary-ipv4: 1
        terminationGracePeriodSeconds: 0
        volumes:
        - flexVolume:
            driver: nodeagent/uds
          name: policysync
  EOF
  ```
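  Both Calico annotations in the manifest above must be JSON arrays encoded as strings. A small local check you could run before applying the manifest (a sketch; `python3` is used here only as a convenient JSON parser):

  ```bash
  # Validate that the pools annotation parses as a JSON array of pool names.
  annotation='["egress-green-1a","egress-green-1b"]'
  python3 -c 'import json,sys; v=json.loads(sys.argv[1]); assert isinstance(v, list); print("valid:", ",".join(v))' "$annotation"
  ```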
  Verify the pods created for the egress gateway `green`:

  ```bash
  kubectl get pods --output=custom-columns='NAME:.metadata.name,IP ADDRESS:.status.podIP'
  ```
- Create the pod `netshoot-browser`.

  ```bash
  kubectl create -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: netshoot-browser
    labels:
      app: netshoot
  spec:
    containers:
    - image: nicolaka/netshoot:latest
      name: netshoot
      command: ["/bin/bash"]
      args: ["-c", "while true; do ping localhost; sleep 60; done"]
  EOF
  ```
Generate test traffic to the Internet from the pod `netshoot-browser`, first without using the egress gateway.
- From the second terminal, open a shell in the pod `netshoot-browser`.

  ```bash
  kubectl exec -it netshoot-browser -- /bin/bash
  ```
- Use the curl command to access the ipconfig.io webserver. ipconfig.io responds to your HTTP GET request with your public IP address.

  ```bash
  curl ipconfig.io
  ```

  The response will be a public IP address that AWS mapped to route traffic from your node. It's ephemeral and can change at any moment.
- On the third terminal, add the following annotation to the pod `netshoot-browser`, selecting the egress gateway `green`.

  ```bash
  kubectl annotate pods netshoot-browser egress.projectcalico.org/selector="egress-code == 'green'"
  ```
- Use the curl command again to access the ipconfig.io webserver.

  ```bash
  curl ipconfig.io
  ```

  This time the response will be one of the Elastic IP addresses allocated to your egress gateway. It's NOT ephemeral and will not change unless you release it.
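  To convince yourself that the egress path changed, compare the address returned by curl before and after adding the annotation. A minimal sketch of that comparison, using placeholder addresses from the RFC 5737 documentation ranges in place of real curl output:

  ```bash
  # Hypothetical values: in the workshop, capture these from `curl ipconfig.io`
  # before and after annotating the pod.
  before="198.51.100.7"   # ephemeral AWS-mapped address (placeholder)
  after="203.0.113.10"    # Elastic IP (placeholder)
  if [ "$before" != "$after" ]; then
    echo "egress IP changed: $before -> $after"
  fi
  ```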
- You can stop the pod from using the egress gateway by removing the annotation added previously.

  ```bash
  kubectl annotate pods netshoot-browser egress.projectcalico.org/selector-
  ```
⬅️ Module 8 - Deploy Egress Gateway for a per namespace selector ↩️ Back to Main