This repository contains two main sections:
- test-partner: Partner debug pods definition for use on a k8s CNF Certification cluster. Used to run platform and networking tests.
- test-target: A trivial example CNF (including a replicaset/deployment, a CRD and an operator), primarily intended to be used to run test-network-function test suites on a development machine.
Together, they make up the basic infrastructure required for "testing the tester". The partner debug pod is always required for platform tests and networking tests.
- Pod Under Test (PUT): The Vendor Pod, usually provided by a CNF Partner.
- Operator Under Test (OT): The Vendor Operator, usually provided by a CNF Partner.
- Debug Pod (DP): A Pod running a UBI8-based support image, deployed as part of a daemonset to access node information. DPs are deployed in the "default" namespace.
- CRD Under Test (CRD): A basic CustomResourceDefinition.
By default, DPs are deployed in the "default" namespace, while all the other deployment files in this repository use "tnf" as the default namespace. A specific namespace can be configured using:
export TNF_EXAMPLE_CNF_NAMESPACE="tnf" # "tnf" used here as an example
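The deployment files fall back to "tnf" when the variable is unset. The shell parameter-expansion pattern below sketches that default-or-override behavior (NAMESPACE is a hypothetical local variable, not one used by the repository's Makefiles):

```shell
# Use TNF_EXAMPLE_CNF_NAMESPACE if set, otherwise fall back to "tnf"
NAMESPACE="${TNF_EXAMPLE_CNF_NAMESPACE:-tnf}"
echo "deploying test resources into namespace: $NAMESPACE"
```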
By default, debug pods are installed on demand when the tnf test suite is run. To deploy debug pods on all nodes in the cluster instead, set the following environment variable:
export ON_DEMAND_DEBUG_PODS=false
The repository can be cloned to a local machine using:
git clone git@github.com:test-network-function/cnf-certification-test-partner.git
Although any CNF Certification results should be generated using a proper CNF Certification cluster, there are times when a local emulator can greatly help with test development. As such, test-target provides a simple PUT, OT, and CRD that satisfy the minimal requirements to run the test cases. These can be used in conjunction with a local kind cluster for local test development.
In order to run the local test setup, the following dependencies are needed:
Install the latest docker version ( https://docs.docker.com/engine/install/fedora ):
sudo dnf remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-selinux \
                docker-engine-selinux \
                docker-engine
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager \
    --add-repo \
    https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io
Perform the post-install steps ( https://docs.docker.com/engine/install/linux-postinstall ):
sudo systemctl start docker.service
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
Configure IPv6 in docker ( https://docs.docker.com/config/daemon/ipv6/ ):
# update docker config
sudo bash -c 'cat <<- EOF > /etc/docker/daemon.json
{
"ipv6": true,
"fixed-cidr-v6": "2001:db8:1::/64"
}
EOF'
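A malformed /etc/docker/daemon.json prevents the Docker daemon from starting, so it can help to validate the file before installing it. A minimal sketch (the /tmp staging path is arbitrary, and python3 is assumed to be available):

```shell
# Stage the config, then validate it before touching /etc/docker
cat > /tmp/daemon.json <<'EOF'
{
    "ipv6": true,
    "fixed-cidr-v6": "2001:db8:1::/64"
}
EOF

# json.tool exits non-zero on malformed JSON, catching typos such as
# trailing commas before they break the docker daemon on restart
if python3 -m json.tool /tmp/daemon.json > /dev/null; then
    echo "daemon.json is valid"
    # sudo install -m 0644 /tmp/daemon.json /etc/docker/daemon.json
else
    echo "daemon.json is malformed, not installing" >&2
fi
```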
Enable IPv6 with:
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0
To persist IPv6 support across reboots, add the following lines to the /etc/sysctl.conf file:
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
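As an alternative to editing /etc/sysctl.conf directly, a drop-in file under /etc/sysctl.d achieves the same persistence. A sketch (the file name is arbitrary):

```shell
# Write the IPv6 settings as a sysctl drop-in fragment
sudo tee /etc/sysctl.d/90-enable-ipv6.conf > /dev/null <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
EOF
# Reload all sysctl configuration so the fragment takes effect now
sudo sysctl --system
```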
Disable the firewall, if present, as Multus interfaces will otherwise not be able to communicate.
Note: if Docker is already running when you run the command below, also restart Docker, since taking the firewall down removes the Docker firewall rules:
sudo systemctl stop firewalld
restart docker:
sudo systemctl restart docker
Download and install Kubernetes In Docker (Kind):
curl -Lo kind https://github.com/kubernetes-sigs/kind/releases/download/v0.23.0/kind-linux-amd64
chmod +x kind
sudo mv kind /usr/local/bin/kind
Configure a dual-stack cluster with three worker nodes and one control-plane node:
cat <<- EOF > config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: dual
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
kind create cluster --config=config.yaml
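The node layout above can also be generated rather than hand-written. A minimal sketch, using a hypothetical WORKERS variable to control the worker count (default 3, matching the config above):

```shell
# Number of worker nodes to generate (hypothetical knob, default 3)
WORKERS="${WORKERS:-3}"

{
  cat <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: dual
nodes:
- role: control-plane
EOF
  # Emit one worker entry per requested node
  for _ in $(seq "$WORKERS"); do
    echo "- role: worker"
  done
} > /tmp/kind-config.yaml

# kind create cluster --config=/tmp/kind-config.yaml
echo "generated $(grep -c 'role: worker' /tmp/kind-config.yaml) worker entries"
```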
Increase the max open files limits to prevent issues due to the large cluster size ( see https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files ):
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
To make the changes persistent, edit the file /etc/sysctl.conf and add these lines:
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
To create the resources, issue the following command:
make install
This will create a PUT named "test" in the TNF_EXAMPLE_CNF_NAMESPACE namespace and a debug daemonset named "debug". The example tnf_config.yml in test-network-function uses this local infrastructure by default. Note that this command also creates the OT and CRD resources.
To verify that the test pods are running:
oc get pods -n $TNF_EXAMPLE_CNF_NAMESPACE -o wide
You should see something like this (note that the 2 "test" pods run on different nodes due to an anti-affinity rule):
$ oc get pods -ntnf -owide
NAME READY STATUS RESTARTS AGE
hazelcast-platform-controller-manager-6bbc968f9-fmmbs 1/1 Running 0 3m19s
test-0 1/1 Running 0 84m
test-1 1/1 Running 0 83m
test-66f77bd94-2w4l8 1/1 Running 0 85m
test-66f77bd94-6kd6j 1/1 Running 0 85m
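Checking the listing by eye works, but the check can also be scripted. A minimal sketch, assuming the standard oc get pods table layout in which STATUS is the third column:

```shell
# Succeed only if every pod row (skipping the header) is Running
all_running() {
  awk 'NR > 1 && $3 != "Running" { bad++ } END { exit bad > 0 }'
}

# Live usage would be:
#   oc get pods -n "$TNF_EXAMPLE_CNF_NAMESPACE" | all_running
# Demo against a sample listing:
if all_running <<'EOF'
NAME                   READY   STATUS    RESTARTS   AGE
test-0                 1/1     Running   0          84m
test-1                 1/1     Running   0          83m
EOF
then
  echo "all pods Running"
fi
```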
To tear down the local test infrastructure from the cluster, use the following command. It may take some time to completely stop the PUT, CRD, OT, and DP:
make clean
Install vagrant for your platform:
https://www.vagrantup.com/downloads
To build the environment, including deploying the test cnf, do the following:
make vagrant-build
The kubeconfig for the new environment will overwrite the file located at ~/.kube/config. You can start running commands against the new cluster right away:
oc get pods -A
To destroy the vagrant environment, do the following:
make vagrant-destroy
To access the virtual machine supporting the cluster, do the following:
cd config/vagrant
[user@fedora vagrant]$ vagrant ssh
[vagrant@k8shost ~]$
The partner repo scripts are located in ~/partner.
brew install kind podman qemu
export KIND_EXPERIMENTAL_PROVIDER=podman
kind create cluster
git clone git@github.com:test-network-function/cnf-certification-test-partner.git &&
cd cnf-certification-test-partner &&
make rebuild-cluster; make install
CNF Certification Test Partner is copyright Red Hat, Inc. and available under an Apache 2 license.