Network Teaming: Smart NIC Management in RHEL/CentOS

Network teaming in RHEL/CentOS allows aggregation of multiple network cards into one virtual interface in order to offer better performance. There are several ways of configuring teaming depending on the desired performance criteria. The benefits of NIC teaming include higher network throughput, better failover support, and improved reliability. For example, if a system has two 100 Mbps network cards, combining them to act as one increases the available bandwidth. They can also be configured as active/backup via network teaming, so that connectivity continues even if one of the underlying interfaces fails.

Configuration tool:

Several tools in RHEL/CentOS can be used to configure teaming: nmtui (the NetworkManager text user interface), nmcli (the NetworkManager command-line interface), or a GUI application. nmcli will be used in this tutorial, since it is fast, handy, and reasonably comprehensive.

Setup for Network Teaming:

We have a server with two network cards. We will team them to present as one virtual NIC.

First, check the NICs present in the system.

[root@localhost ~]# nmcli con show
NAME                UUID                                  TYPE            DEVICE 
Wired connection 1  9a8a14b2-2322-402c-a6f0-04a5c47d63ee  802-3-ethernet  enp0s8 
enp0s3              541201f9-deb4-4868-860a-1f4e0c3d4e09  802-3-ethernet  enp0s3

Here we can see two interfaces, enp0s8 and enp0s3, present with their connection profiles. Get rid of these old settings.

[root@localhost ~]# nmcli con del "Wired connection 1"
Connection 'Wired connection 1' (9a8a14b2-2322-402c-a6f0-04a5c47d63ee) successfully deleted.
[root@localhost ~]# nmcli con del  enp0s3
Connection 'enp0s3' (541201f9-deb4-4868-860a-1f4e0c3d4e09) successfully deleted.

Now we configure the team interface.

[root@localhost ~]# nmcli con add type team con-name team0 config '{ "runner" : { "name" : "loadbalance"}}'
Connection 'team0' (1c56f367-19d8-4917-b481-f302c0902b78) successfully added.
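To confirm what was stored, the team configuration can be read back from the connection profile. A quick check (nmcli output formatting may vary slightly between versions):

```shell
# Read back the stored teamd configuration for team0
# (-g/--get-values prints only the named property)
nmcli -g team.config con show team0
```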

Then we configure the slave interfaces to join the team.

[root@localhost ~]# nmcli con add type team-slave ifname enp0s8 master team0
Connection 'team-slave-enp0s8' (887d38d7-babb-4b18-9630-5329ece5b1a6) successfully added
[root@localhost ~]# nmcli con add type team-slave ifname enp0s3 master team0
Connection 'team-slave-enp0s3' (4ac00333-0f33-4d9e-a20e-0707c4549909) successfully added

At this point, we should have our team interface and both team slave interfaces configured. We can verify this with the following command.

[root@localhost ~]# nmcli con show

NAME                 UUID                                  TYPE            DEVICE   
team-slave-enp0s8    887d38d7-babb-4b18-9630-5329ece5b1a6  802-3-ethernet  enp0s8   
team0                1c56f367-19d8-4917-b481-f302c0902b78  team            nm-team 
team-slave-enp0s3    4ac00333-0f33-4d9e-a20e-0707c4549909  802-3-ethernet  enp0s3  

Now we need to bring up the interfaces.

[root@localhost ~]# nmcli con up team0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
[root@localhost ~]# nmcli con up team-slave-enp0s8
[root@localhost ~]# nmcli con up team-slave-enp0s3

Now our team interface should be active. With this default setting, the team interface will get its IP address via DHCP. We can also assign a static IP.

For example, to assign 192.168.1.10/24 (substitute an address valid for your network):

[root@localhost ~]# nmcli con mod team0 ipv4.addresses 192.168.1.10/24
[root@localhost ~]# nmcli con mod team0 ipv4.method manual
[root@localhost ~]# nmcli con up team0

Verify the Teaming:

[root@localhost ~]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master nm-team state UP qlen 1000
    link/ether 08:00:27:2c:91:70 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master nm-team state UP qlen 1000
    link/ether 08:00:27:2c:91:70 brd ff:ff:ff:ff:ff:ff
4: nm-team: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 08:00:27:2c:91:70 brd ff:ff:ff:ff:ff:ff
    inet brd scope global dynamic nm-team
       valid_lft 7032sec preferred_lft 7032sec
    inet6 fe80::a00:27ff:fe2c:9170/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever

Here we can see that our team interface has an IP address, while both slaves have none. We can ping this IP from another machine.
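Besides ip a, the teamd control utility can report the runner in use and the state of each port. A quick check, assuming the teamd package is installed (nm-team is the device name from the output above):

```shell
# Show the team's runner, ports, and link-watch status
teamdctl nm-team state
```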

Configuration options:

Initially, we configured our team interface with the “loadbalance” runner. In this mode, data packets are optimally balanced between the slave interfaces using a hash function.

Various runner options for the config in network teaming:

roundrobin: Packets are transmitted in a round-robin pattern via all the interfaces in the team.
activebackup: One interface is active while the others stand by; failover happens on a link-state change.
loadbalance: Traffic is optimally balanced between slave interfaces using a hash function.
lacp: Load balancing is done using LACP (Link Aggregation Control Protocol). Additional configuration on the switches is needed.
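Switching runners only requires changing the JSON passed at creation time. As a sketch, this recreates the team from this tutorial with the activebackup runner instead of loadbalance (same interface names as above):

```shell
# Active/backup variant of the team built earlier in this tutorial
nmcli con add type team con-name team0 \
    config '{"runner": {"name": "activebackup"}}'
nmcli con add type team-slave ifname enp0s8 master team0
nmcli con add type team-slave ifname enp0s3 master team0
```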

Additional configuration for virtual environments:

If you are following this tutorial in virtualization software like VirtualBox or VMware, make sure the interfaces are in “promiscuous mode”; otherwise, testing the failover of one interface won’t work. This can normally be done via the virtual machine’s settings. Once the machine is up, use the following commands:

ip link set enp0s8 promisc on
ip link set enp0s3 promisc on
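With promiscuous mode enabled, failover can be exercised by pinging the team’s IP from another machine while one slave is taken down; connectivity should continue over the remaining slave. A rough sketch:

```shell
# Take one slave down while pinging the team IP from another host;
# the ping should keep succeeding over the remaining slave
nmcli dev disconnect enp0s8
# ...observe that the ping continues...
nmcli dev connect enp0s8
```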
