
Reference Switch Lite

Yuta edited this page Sep 24, 2021 · 1 revision

Name

reference_switch_lite

Location

hw/projects/reference_switch_lite

IP Cores

Description

The division of the hardware into modules was hinted at in the previous section. Understanding these modules is essential to making the most of the available designs. The reference projects in the NetFPGA platform, including the Switch Lite, all follow the same modular structure: the design is a pipeline in which each stage is a separate module.

Every incoming packet is annotated with metadata and is finally transformed into a 1024-bit AXI4-Stream. The TX side follows the same path in the opposite direction. The input arbiter has three input interfaces: two from the CMAC IPs and one from a DMA module (described later on). Each input to the arbiter connects to an input queue. The simple arbiter rotates over all the input queues in a round-robin manner, each time selecting a non-empty queue and writing one full packet from it to the next stage in the data path, which is the output port lookup module.
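The arbitration policy above can be sketched in Python. This is illustrative only: the real input arbiter is an HDL module, and the queue names here are hypothetical.

```python
from collections import deque

def round_robin_arbiter(queues):
    """Scan the input queues in a fixed order, forwarding one full
    packet from each non-empty queue per rotation, until all are empty."""
    out = []
    while any(queues):
        for q in queues:
            if q:                         # skip empty queues
                out.append(q.popleft())   # emit one whole packet
    return out

# Three inputs, as in the text: two CMAC ports and one DMA queue.
cmac0 = deque(["p0a", "p0b"])
cmac1 = deque(["p1a"])
dma = deque(["d0"])
print(round_robin_arbiter([cmac0, cmac1, dma]))
# -> ['p0a', 'p1a', 'd0', 'p0b']
```

Note that the rotation interleaves the sources fairly: the second packet from cmac0 waits until every other queue has had its turn.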

The output port lookup module is responsible for deciding which port a packet goes out of. After that decision is made, the packet is handed to the output queues module. The lookup module implements a simple learning CAM using registers. Packets with an unknown destination MAC address are broadcast.
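The learn-and-lookup behavior can be modeled with a plain dictionary standing in for the register-based CAM (a sketch of the policy, not the hardware implementation; the MAC strings are placeholders):

```python
BROADCAST = "all_ports"

def lookup(cam, src_mac, dst_mac, in_port):
    """Learn the source address, then look up the destination.
    Unknown destinations fall back to broadcast."""
    cam[src_mac] = in_port               # learn: src_mac lives on in_port
    return cam.get(dst_mac, BROADCAST)   # unknown dst -> broadcast

cam = {}
print(lookup(cam, "aa:aa", "bb:bb", 0))  # dst unknown -> 'all_ports'
print(lookup(cam, "bb:bb", "aa:aa", 1))  # aa:aa was learned on port 0 -> 0
```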

Once a packet arrives at the output_queues module, it already has a marked destination (provided on a side channel, the TUSER field). According to the destination, it is placed in a dedicated output queue. There are five such output queues: one for each physical Ethernet port and one for the DMA block. Note that a packet may be dropped if its output queue is full or almost full. When a packet reaches the head of its output queue, it is sent to the corresponding output port, either a CMAC IP or the DMA module. The output queues are arranged in an interleaved order: one physical Ethernet port, one DMA port, and so on. Even queues are therefore assigned to physical Ethernet ports, and odd queues to the virtual DMA ports.
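The interleaved layout boils down to a parity rule on the queue index, which a one-line helper makes explicit (a hypothetical helper for illustration, not part of the design):

```python
def queue_kind(queue_index):
    """Even queue indices map to physical Ethernet ports,
    odd indices to virtual DMA ports."""
    return "ethernet" if queue_index % 2 == 0 else "dma"

print([queue_kind(i) for i in range(4)])
# -> ['ethernet', 'dma', 'ethernet', 'dma']
```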

The open-nic-shell module serves as a DMA engine for the reference switch design. It includes the Xilinx PCIe core, a DMA engine, and an AXI4 Interconnect module. To the other NetFPGA modules it exposes AXI4-Stream (master and slave) interfaces for sending and receiving packets, as well as an AXI4-Lite master interface through which all AXI registers can be accessed from the host over PCIe. To this end, it connects to the axi_interconnect module.

Testing

  1. Make sure you clone the latest version of the NetFPGA package. Please ensure that you have the necessary packages installed. The current testing infrastructure is Python based.
git clone https://github.com/NetFPGA/NetFPGA-PLUS.git
  2. Make sure to update the following environment variables in the file {user-path}/NetFPGA-PLUS/tools/settings.sh:
  • PLUS_FOLDER
  • NF_PROJECT_NAME
  • DEVICE

To set the environment variables, source both relevant setting files:

source {user-path}/NetFPGA-PLUS/tools/settings.sh
source $XILINX_PATH/settings64.sh
  3. Compile the library of IP cores. (There is no need to recompile the library for each new project unless you have changed the IP cores.)
# cd $PLUS_FOLDER/hw
# make
  4. Program the FPGA.
  • If you want to run the Hardware tests with the pre-existing bitfile provided in the base repo:
cd $NF_DESIGN_DIR/bitfiles
# xsdb

On the xsct console, use connect and then fpga -f reference_switch_lite_<BOARD>.bit to program the FPGA with the bitfile, then exit to close the xsdb console. Reboot the machine.

  • If you want to create your own bitfile and run the Hardware tests:
cd $NF_DESIGN_DIR
# make
# cd bitfiles
# xsdb

On the xsct console, use connect and then fpga -f reference_switch_lite_<BOARD>.bit to program the FPGA with the bitfile, then exit to close the xsdb console. Reboot the machine.

  5. Check if the bitfile is loaded using the following command.
lspci -vxx | grep Xilinx

If the host machine doesn't detect the Xilinx device, you need to reprogram the FPGA and reboot as mentioned in the previous step.

  6. Build the driver for NetFPGA PLUS.
# cd sw/driver/
# make 
# sudo insmod onic.ko

Then run ip a to check that the 'nfX' interfaces are visible.

  7. Run the test.

The top-level file nf_test.py can be found inside NetFPGA-PLUS/tools/scripts. Tests are run with nf_test.py followed by arguments that select the test mode (sim or hw) and the specific test to run. For instance:

# ./nf_test.py sim --major learning --minor sw

or

# sudo -E env PYTHONPATH=`echo $PYTHONPATH` zsh -c 'source $XILINX_PATH/settings64.sh && ./nf_test.py hw --major simple --minor broadcast '

where -E passes already-set environment variables such as NF_DESIGN_DIR through sudo. Similarly, env PYTHONPATH and sourcing settings64.sh pass $PYTHONPATH (needed by the NFTest library) and the Vivado paths into the sudo environment. sudo is required to open a socket and to send packets through the device under test.

For a complete list of arguments type ./nf_test.py --help.

The test infrastructure is Python based. You can find the tests for each project inside its hw/projects/{project_name}/test folder.

Testing hardware using two or more machines

To run the test, you need two machines, A and B. Let's say Machine A is equipped with NetFPGA and Machine B is equipped with a third-party 100G dual-port NIC.

Download the reference_switch_lite bitfile from hw/projects/reference_switch_lite/bitfiles/reference_switch_lite_<BOARD>.bit.

Connect Machine A and Machine B using two 100G cables. Assume we use nf0 and nf1 on Machine A and eth1 and eth2 on Machine B. Generate packets from eth1 with given MAC addresses (say, dst-MAC = x and src-MAC = y). Check that all the NetFPGA physical ports send the packet back. Then, generate packets from eth2 with dst-MAC = y and src-MAC = x. You should see that only nf0 forwards packets back.
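For the experiment above, the test frames can be built by hand with nothing but the standard library. This is a sketch: the MAC addresses x and y are hypothetical placeholders, and sending the resulting bytes (for example over a raw socket bound to eth1 or eth2) is left to whatever packet tool you prefer.

```python
import struct

def ethernet_frame(dst_mac, src_mac, ethertype=0x0800, payload=b"\x00" * 46):
    """Build a minimal Ethernet frame: 6-byte dst MAC, 6-byte src MAC,
    2-byte EtherType, then payload (46 zero bytes meets the minimum size)."""
    mac = lambda m: bytes(int(b, 16) for b in m.split(":"))
    return mac(dst_mac) + mac(src_mac) + struct.pack("!H", ethertype) + payload

# Hypothetical addresses standing in for x and y from the text.
x, y = "02:00:00:00:00:01", "02:00:00:00:00:02"
first = ethernet_frame(dst_mac=x, src_mac=y)  # from eth1: switch learns y
reply = ethernet_frame(dst_mac=y, src_mac=x)  # from eth2: unicast back to nf0
print(len(first))  # 60-byte minimum frame (FCS excluded)
```

The first frame has an unknown destination, so the learning switch broadcasts it on all ports; the reply's destination was just learned, so only nf0 forwards it.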