

vNIC adapter

The SR-IOV implementation on Power servers has an additional feature called vNIC (virtual Network Interface Controller). A vNIC is a type of virtual Ethernet adapter that is configured on the LPAR. Each vNIC is backed by an SR-IOV logical port (LP) that is available on the VIO server. The key advantage of placing the SR-IOV logical port on the VIOS is that it makes the client LPAR eligible for Live Partition Mobility. (Although the backing device resides remotely, a technology called LRDMA (Logical Redirected DMA) lets the vNIC map its transmit and receive buffers directly to the remote SR-IOV logical port.)

The above picture shows that data is transferred from the client LPAR's memory directly to the SR-IOV adapter, without being copied into VIOS memory.

vNIC configuration happens on the HMC in a single step (only in the Enhanced GUI). When you add a vNIC adapter to the LPAR, the HMC automatically creates all the necessary adapters on the VIOS (SR-IOV logical port, vNIC server adapter) and on the LPAR (vNIC client adapter). From the user's perspective, no additional configuration is needed on the VIOS side.

It is really just one step on the Enhanced GUI: choose LPAR --> Virtual NICs --> Add Virtual NIC (select a VIO server and capacity)
In addition, a Port VLAN ID (PVID) may be configured for an SR-IOV logical port to provide VLAN tagging and untagging.
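The same operation can also be scripted from the HMC command line. A hedged sketch, with placeholder managed system, VIOS and LPAR names; the exact backing_devices syntax may vary between HMC releases:

```shell
# Add a vNIC to the partition; the HMC creates the SR-IOV logical port and
# the vNIC server adapter on the chosen VIOS automatically.
# Assumed backing_devices format: sriov/<vios-name>/<vios-id>/<adapter-id>/<phys-port-id>/<capacity>
chhwres -r virtualio -m mysys -o a -p vnic_lpar --rsubtype vnic \
    -a "port_vlan_id=0,backing_devices=sriov/vios1/1/1/0/2"

# Verify the result:
lshwres -r virtualio -m mysys --rsubtype vnic --filter lpar_names=vnic_lpar
```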

on VIO:
$ lsmap -all -vnic
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vnicserver0   U8608.66E.21ABC7W-V1-C32897             3 vnic_lpar      AIX

Backing device:ent14
Client device name:ent1
Client device physloc:U8608.66E.21ABC7W-V3-C3

on client LPAR:
# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
ent1   Available  Virtual NIC Client Adapter (vnic)

Regarding LPM:
When doing LPM, the target system must have an adapter in SR-IOV shared mode with an available logical port and available capacity on a physical port. Additionally, if labels are set correctly on the SR-IOV physical ports, the right physical port is chosen automatically during LPM based on the matching label names.


vNIC and Etherchannel (NIB)

If two vNICs are created, each backed by a different VIO server, high availability can be achieved by creating an Etherchannel on top of them (one adapter is active, the other is the backup).

# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
ent1   Available  Virtual NIC Client Adapter (vnic)       <-- coming from VIO1
ent2   Available  Virtual NIC Client Adapter (vnic)       <-- coming from VIO2
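The NIB Etherchannel on top of the two vNICs can be created with mkdev; a sketch using the adapter names above (the netaddr gateway address is a made-up example, used for ping-based failure detection):

```shell
# ent1 (via VIO1) is the primary adapter, ent2 (via VIO2) is the backup
mkdev -c adapter -s pseudo -t ibm_ech \
    -a adapter_names=ent1 -a backup_adapter=ent2 -a netaddr=10.1.1.1

# the Etherchannel shows up as the next free entX (here assumed to be ent3);
# put the IP address on its interface:
chdev -l en3 -a netaddr=10.1.1.50 -a netmask=255.255.255.0 -a state=up
```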

After creating the Etherchannel (NIB, Network Interface Backup configuration) with an IP address, the status of the vNIC adapters can be checked in kdb:

# echo "vnic" | kdb
|       pACS       | Device | Link |    State     |
| F1000A00328C0000 |  ent1  |  Up  |     Open     |
| F1000A00328E0000 |  ent2  |  Up  |     Open     |

If VIO1 goes down:
59224136   0320144317 P H ent3           ETHERCHANNEL FAILOVER

# echo "vnic" | kdb
|       pACS       | Device | Link |    State     |
| F1000A00328C0000 |  ent1  | Down |   Unknown    |
| F1000A00328E0000 |  ent2  |  Up  |     Open     |


vNIC failover

vNIC failover provides a high availability solution at the LPAR level. In this configuration, a vNIC client adapter can be backed by multiple logical ports to avoid a single point of failure. At any time, only one logical port is connected to the vNIC client (similar to a NIB configuration, but only one adapter exists on the LPAR). If the active connection fails, the Hypervisor selects a new backing device.

Prerequisites for vNIC failover (minimum levels):
- VIOS Version 2.2.5
- System Firmware Release 860.10
- HMC Release 8 Version 8.6.0
- AIX 7.1 TL4 or AIX 7.2

vNIC failover is achieved by monitoring the logical port (link status) and the VIOS health status, and reporting these to the Power Hypervisor in regular heartbeats. In case of an error, the Hypervisor switches traffic to the next backing device.

Each backing device has a priority value (smaller number means higher priority) and a failover policy.

For example, when a vNIC is initialized, the Power Hypervisor selects the logical port with the highest priority (10 in the above picture). If that port goes down, the Hypervisor is notified and selects, from the remaining (functional) backing devices, the logical port with the highest priority (20 in the above picture) as the next active backing device. Later, when the previously failed logical port recovers, the Hypervisor switches back to the recovered port if "auto priority failover" is enabled; otherwise it stays on the current backing device until the next failure occurs.

Creating vNIC failover configuration is simple:
Choose LPAR --> Virtual NICs --> Add Virtual NIC

The already existing vNIC adapters (without failover) do not need to be removed, those can be modified online.

With "Add entry" you can add new lines for backing devices (with priority value)

Auto Priority Failover: Enabled means that when the active backing device no longer has the highest priority (e.g. a higher-priority device comes back online), the hypervisor automatically fails over to that device. (I chose "Disabled", as a manual failback at a convenient time is probably better.)

Failover Priority: A smaller number means higher priority. (During a failure, the next highest priority device will provide the network traffic.)
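The configured backing devices and their priorities can also be listed from the HMC command line; a sketch with placeholder names (the -F field names are assumptions based on the vNIC attributes shown by lshwres):

```shell
lshwres -r virtualio -m mysys --rsubtype vnic \
    --filter lpar_names=vnic_lpar -F slot_num,auto_priority_failover,backing_devices
```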

One small/side comment:
As the documentation says, vNIC failover occurs when the link goes down. In rare cases, however, the link status can be up while the network is down on the switch side (for example, a routing problem). In these rare cases no failover happens (as the link status is still up), but the LPAR loses network connectivity. (If both links were active, as in an LACP-style configuration connected to different switches, or if a ping-based check were implemented, this would not be a problem.)


How to check which VIO (and Adapter) provides the network traffic

On AIX, entstat shows many details, including which VIOS is currently serving the network traffic:

# entstat -d entX | tail

Server Information:
        LPAR ID: 1
        LPAR Name: VIO1
        VNIC Server: vnicserver1
        Backing Device: ent17
        Backing Device Location: U98D8.001.SRV3242-P1-C9-T2-S4



Martin B. said...

Can I ask a (probably stupid) question? Can I see on the VIO level if there is a failover vNIC configured?

aix said...

Unfortunately, I am not aware of any commands which could help with that.
(this does not mean it should not be possible somehow.)

pg said...

How does the vNIC performance compare to directly assigning the SR-IOV adapter to the lpar?

aix said...

As far as I know, IBM says these have pretty much the same performance. I did a short search and found these performance tests:
- SR-IOV internal switching with new 100 Gb adapters and Jumbo Frames POWER9 S924 (9009-42A): 87.6 Gbits/sec
- vNIC external switching with new PCIe4 100 Gb adapters POWER9 E980 (9080-M9S): 76.2 Gbits/sec

In the first example the network packet did not leave the box (internal switching), while in the second example it went outside, and vNIC could still produce similar performance.

Unknown said...

You can see on the HMC level whether there are one or more backing devices, as well as the failover priority for the vNIC(s): lshwres -m "hostname or IP" -r virtualio --rsubtype vnic --filter lpar_ids="lpar id#". Format of the SRIOV options.


Where capacity is optional and defaults to 2, failover-priority is optional and defaults to 50.

This string will begin after the "backing_devices" return from the above command. The HMC cli is a giant PITA, so be warned...

aix said...

wow..thx a lot for this hint :)

Anonymous said...

Hi, does the capacity of a vNIC count against the SR-IOV physical port capacity, or against the SR-IOV capacity assigned to the VIOS?