Use DPDK to send packets between platforms

This guide describes how to send packets between two platforms.

Overview

Figure 1 shows how to send packets between two platforms in a simple configuration. The example uses the Data Plane Development Kit, which is a set of libraries, drivers, sample applications, and tools for fast packet processing.


Figure 1: Environment for l3fwd DPDK application

This example uses the following DPDK components:

  • pktgen: Traffic generator. See pktgen documentation for details.

  • l3fwd: Layer 3 forwarding example application. See l3fwd documentation for details.

Prerequisites

  • Two platforms using Clear Linux* OS release 31130 or higher.

  • Both images must include the kernel-native bundle.

  • Install the following bundles:

    sudo swupd bundle-add network-basic-dev dpdk devpkg-dpdk
    
  • Each platform must have at least one NIC. Check the list of NICs supported by DPDK at dpdk.org (a quick way to list the NICs on each platform is shown after this list).

  • Two network cables.
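
To confirm that each platform has at least one usable NIC, you can list the Ethernet devices with lspci, the same tool used in Appendix A:

    lspci | grep Ethernet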

Install DPDK and build the l3fwd example (Platform B)

  1. Change to the l3fwd example directory.

    cd /usr/share/dpdk/examples/l3fwd
    
  2. Set the RTE_SDK variable to the path of the DPDK makefiles.

    export RTE_SDK=/usr/share/dpdk/
    
  3. Set the RTE_TARGET variable to the gcc* build target configuration.

    export RTE_TARGET=x86_64-native-linux-gcc
    
  4. Build the l3fwd application. Use sudo -E so that the exported RTE_SDK and RTE_TARGET variables are preserved for the root environment.

    sudo -E make
    

Build pktgen (Platform A)

  1. Download the pktgen tar package v3.1.2 or newer.

  2. Decompress the package and change to the uncompressed source directory, for example as shown below.
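
    A minimal sketch, assuming the downloaded tarball is named pktgen-3.1.2.tar.gz (adjust the file and directory names to the release you actually downloaded):

    tar -xzf pktgen-3.1.2.tar.gz
    cd pktgen-3.1.2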

  3. Set the RTE_SDK variable to the path of the DPDK makefiles.

    export RTE_SDK=/usr/share/dpdk/
    
  4. Set the RTE_TARGET variable to the gcc build target configuration.

    export RTE_TARGET=x86_64-native-linux-gcc
    
  5. Build the pktgen project with the CONFIG_RTE_BUILD_SHARED_LIB variable set to "n". Use sudo -E so that the exported variables are preserved.

    sudo -E make CONFIG_RTE_BUILD_SHARED_LIB=n
    

Bind NICs to DPDK kernel drivers (Platforms A and B)

The l3fwd application uses two NICs. DPDK includes tools for binding NICs to DPDK-compatible kernel modules so that DPDK applications can use them.

  1. Load the DPDK I/O kernel module.

    sudo modprobe vfio-pci
    
  2. Check the NIC status to determine which network cards are not busy. When another application is using them, the status shows “Active”, and those NICs cannot be bound.

    sudo dpdk-devbind --status
    
  3. Bind the two available NICs, running the command once for each device entry. The general syntax for binding is: dpdk-devbind --bind=vfio-pci <device-entry>. A working example for one NIC is shown below:

    sudo dpdk-devbind --bind=vfio-pci 01:00.0
    
  4. Check the NIC status again to verify that the NICs are bound correctly, as shown below. If successful, drv displays the value vfio-pci, which confirms that the NICs are using the DPDK-compatible module.
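
    sudo dpdk-devbind --status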

Set hugepages (Platforms A and B)

Clear Linux OS supports hugepages for the large memory pool allocation used for packet buffers.

  1. Set the number of hugepages.

    echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    
  2. Allocate pages on each NUMA node.

    echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    
  3. Make memory available for DPDK.

    sudo mkdir -p /mnt/huge
    sudo mount -t hugetlbfs nodev /mnt/huge
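
    To confirm the allocation, you can check the kernel's hugepage counters, for example:

    grep Huge /proc/meminfo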
    

    For more information, refer to the System Requirements section of the DPDK Getting Started Guide.

Set up the physical environment (Platforms A and B)

Connect the NICs on Platform A to the NICs on Platform B using the network cables as shown in figure 2.


Figure 2: Physical network environment

Run l3fwd application (Platform B)

The l3fwd application is one of the DPDK examples available after you install the DPDK bundles listed in the prerequisites. l3fwd forwards packets from one NIC to another. For details, refer to the l3fwd documentation.

  1. Open the l3fwd example directory.

    cd /usr/share/dpdk/examples/l3fwd
    
  2. Identify the poll mode driver that matches your NICs:

    1. DPDK applications need poll mode drivers to access the NICs.

    2. The poll mode drivers are shared objects located in /usr/lib64.

    3. See the full list of supported NICs at dpdk.org NICs.

    4. Determine which kernel module each NIC is using and choose the poll mode driver that corresponds to it.

  3. NIC binding and application configuration depend on your network use case and the available system resources. Use the -d flag to load the poll mode driver.

    The following example assumes that the NICs use the e1000 kernel driver and the corresponding e1000 poll mode driver, librte_pmd_e1000.so, which is located in /usr/lib64 on Clear Linux OS. The options used in the command are explained below it.

    sudo ./build/l3fwd -c 0x3 -n 2 -d librte_pmd_e1000.so -- -p 0x3 --config="(0,0,0),(1,0,1)"
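
    For reference, the options break down as follows (DPDK EAL options appear before the --, l3fwd options after it):

    # -c 0x3      : core mask; run on lcores 0 and 1
    # -n 2        : number of memory channels
    # -d librte_pmd_e1000.so : load the e1000 poll mode driver shared object
    # -p 0x3      : port mask; enable ports 0 and 1
    # --config="(port,queue,lcore)" : each tuple maps an RX queue of a port to an lcore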
    
  4. The l3fwd application shows port initialization details at startup, including the MAC address of each port.

    Save the MAC addresses for configuring pktgen.

Run pktgen application (Platform A)

pktgen is a network traffic generator built on DPDK.

  1. pktgen configuration depends on the network setup and the available system resources. The following example shows a basic configuration; the options are explained below it.

    sudo ./app/app/x86_64-native-linux-gcc/pktgen -c 0xf -n 4 -- -p 0xf -P -m "1.0, 2.1"
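
    For reference (DPDK EAL options appear before the --, pktgen options after it; the -m mapping uses pktgen's <lcore>.<port> syntax):

    # -c 0xf        : core mask; run on lcores 0-3
    # -n 4          : number of memory channels
    # -p 0xf        : port mask
    # -P            : enable promiscuous mode on all ports
    # -m "1.0, 2.1" : lcore 1 handles port 0, lcore 2 handles port 1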
    
  2. Enable colorful theme output (optional).

    Pktgen> theme enable
    
  3. Use the MAC addresses shown by the l3fwd application during initialization. The command to set the MAC addresses in pktgen has the format:

    set mac <port number> <mac address>
    

    Here is a working example:

    Pktgen> set mac 0 00:1E:67:CB:E8:C9
    Pktgen> set mac 1 00:1E:67:CB:E8:C9
    
  4. Send packets.

    Pktgen> start 0-1
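
    When you finish, you can stop the traffic and exit from the pktgen prompt:

    Pktgen> stop 0-1
    Pktgen> quit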
    

For more details, see the pktgen documentation.

Appendix A: Use pass-through for virtual machines

This section explains how to set up a virtual environment in which a virtual machine controls the host's NICs through PCI pass-through.

  1. Create a new working directory and change to it (the directory name in the example below is arbitrary).
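
    mkdir dpdk-vm && cd dpdk-vm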

  2. Download or create a start_qemu.sh script for running a KVM virtual machine:

    sudo curl -O https://cdn.download.clearlinux.org/image/start_qemu.sh
    
  3. Download a bare-metal image of Clear Linux OS and rename it clear.img, for example as sketched below.
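
    A sketch of the download, assuming release 31130; the image file name varies by release, so replace <image-file> with the bare-metal image name listed on the Clear Linux releases page:

    curl -O https://cdn.download.clearlinux.org/releases/31130/clear/<image-file>.xz
    unxz <image-file>.xz
    mv <image-file> clear.img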

  4. Find the Ethernet* device entry and its vendor and device IDs:

    sudo lspci -nn | grep Ethernet
    

    An example output:

    03:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521]
    

    where 03:00.0 is the device entry and 8086:1521 is the vendor:device ID. Record this information, because you need it to unbind the NICs from the host.

  5. Unbind the NICs from the host so they can be passed through to the virtual machine. Clear Linux OS supports this operation. The commands take the following format:

    echo "vendor device_ID" > /sys/bus/pci/drivers/pci-stub/new_id
    echo "entry for device" > /sys/bus/pci/drivers/igb/unbind
    echo "entry for device" > /sys/bus/pci/drivers/pci-stub/bind
    echo "vendor device_ID" > /sys/bus/pci/drivers/pci-stub/remove_id
    

    Here is a working example:

    echo "8086 1521" | sudo tee /sys/bus/pci/drivers/pci-stub/new_id
    echo "0000:03:00.0" | sudo tee /sys/bus/pci/drivers/igb/unbind
    echo "0000:03:00.0" | sudo tee /sys/bus/pci/drivers/pci-stub/bind
    echo "8086 1521" | sudo tee /sys/bus/pci/drivers/pci-stub/remove_id
    
  6. Assign the unbound NICs to the KVM virtual machine (guest). Modify the qemu-system-x86_64 arguments in the start_qemu.sh script, adding lines with the host NIC information in the following format:

    -device pci-assign,host="<entry for device>",id=passnic0,addr=03.0
    -device pci-assign,host="<entry for device>",id=passnic1,addr=04.0
    

    Here is a working example:

    -device pci-assign,host=03:00.0,id=passnic0,addr=03.0 \
    -device pci-assign,host=03:00.3,id=passnic1,addr=04.0 \
    
  7. Add NUMA nodes to the virtual machine by adding lines to the qemu-system-x86_64 arguments in the start_qemu.sh script, in the following format:

    -numa node,mem=<memory>,cpus=<number of cpus>
    

    Here is a working example for a virtual machine with 4096 MB of memory and four CPUs:

    -numa node,mem=2048,cpus=0-1 \
    -numa node,mem=2048,cpus=2-3 \
    

    Note

    Each NUMA node must use the same amount of memory.

  8. Run the start_qemu.sh script.
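
    A typical invocation, assuming the script accepts the image file path as its first argument:

    chmod +x start_qemu.sh
    sudo ./start_qemu.sh clear.img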