NPIV (Virtual Fibre Channel Adapter)
With NPIV, you can configure the managed system so that multiple logical partitions can access independent physical storage through the same physical Fibre Channel adapter. (NPIV stands for N_Port ID Virtualization; an N_Port ID is a storage term identifying a node port (an FC adapter port) in the SAN.)
To access physical storage in a typical storage area network (SAN) that uses fibre channel, the physical storage is mapped to logical units (LUNs) and the LUNs are mapped to the ports of physical fibre channel adapters. Each physical port on each physical fibre channel adapter is identified using one worldwide port name (WWPN).
NPIV is a standard technology for fibre channel networks that enables you to connect multiple logical partitions to one physical port of a physical fibre channel adapter. Each logical partition is identified by a unique WWPN, which means that you can connect each logical partition to independent physical storage on a SAN.
To enable NPIV on the managed system, you must create a Virtual I/O Server logical partition (version 2.1, or later) that provides virtual resources to client logical partitions. You assign the physical fibre channel adapters (that support NPIV) to the Virtual I/O Server logical partition. Then, you connect virtual fibre channel adapters on the client logical partitions to virtual fibre channel adapters on the Virtual I/O Server logical partition. A virtual fibre channel adapter is a virtual adapter that provides client logical partitions with a fibre channel connection to a storage area network through the Virtual I/O Server logical partition. The Virtual I/O Server logical partition provides the connection between the virtual fibre channel adapters on the Virtual I/O Server logical partition and the physical fibre channel adapters on the managed system.
The following example shows a managed system configured to use NPIV:
On the VIO server:
root@vios1: / # lsdev -Cc adapter
fcs0 Available 01-00 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs1 Available 01-01 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
vfchost0 Available Virtual FC Server Adapter
vfchost1 Available Virtual FC Server Adapter
vfchost2 Available Virtual FC Server Adapter
vfchost3 Available Virtual FC Server Adapter
vfchost4 Available Virtual FC Server Adapter
On the VIO client:
root@aix21: /root # lsdev -Cc adapter
fcs0 Available C6-T1 Virtual Fibre Channel Client Adapter
fcs1 Available C7-T1 Virtual Fibre Channel Client Adapter
Two unique WWPNs (worldwide port names) starting with the letter "c" are generated by the HMC for the VFC client adapter. The pair is critical: both must be zoned if Live Partition Mobility will be used. The virtual I/O client partition uses one WWPN to log into the SAN at any given time. The other WWPN is used when the client logical partition is moved to another managed system using PowerVM Live Partition Mobility.
lscfg -vpl fcsX shows only the first WWPN, as a static value: the first WWPN that was assigned to the HBA
fcstat fcsX shows only the active WWPN, the one currently in use (this changes after an LPM)
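The WWPN can be pulled out of the lscfg output with a one-liner. A minimal sketch follows; the sample output and WWPN value are illustrative, on a live LPAR you would pipe the real `lscfg -vpl fcs0` command instead of the saved text:

```shell
# Saved sample of 'lscfg -vpl fcs0' output (illustrative values only)
lscfg_output='        Network Address.............C0507605A3D20058
        Device Specific.(Z0)........00000000'

# Strip everything up to and including the dot padding after "Network Address"
echo "$lscfg_output" | sed -n 's/.*Network Address\.*\(.*\)/\1/p'
# prints: C0507605A3D20058
```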
There can be one VFC client adapter per physical port per client partition, and a maximum of 64 active VFC client adapters per physical port. There is always a one-to-one relationship between the virtual Fibre Channel client adapter and the virtual Fibre Channel server adapter.
The difference between traditional redundancy with SCSI adapters and the NPIV technology using virtual Fibre Channel adapters is that the redundancy occurs on the client, because only the client recognizes the disk. The Virtual I/O Server is essentially just a pass-through managing the data transfer through the POWER hypervisor. When using Live Partition Mobility, the storage moves to the target server without requiring a reassignment (unlike virtual SCSI), because the virtual Fibre Channel adapters have their own WWPNs that move with the client partition to the target server.
If a virtual FC client adapter is created with DLPAR and then re-created in the partition profile to make it persistent across restarts, a different pair of virtual WWPNs is generated when the adapter is created in the profile. To prevent this undesired situation, which would require new SAN zoning and storage configuration, save any virtual Fibre Channel client adapter DLPAR changes into a new partition profile by selecting Configuration -> Save Current Configuration, and change the default partition profile to the new profile.
Check NPIV adapter mapping on client:
root@bb_lpar: / # echo "vfcs" | kdb <--vfcs is a kdb subcommand
NAME ADDRESS STATE HOST HOST_ADAP OPENED NUM_ACTIVE
fcs0 0xF1000A000033A000 0x0008 aix-vios1 vfchost8 0x01 0x0000 <--shows which vfchost is used on vio server for this client
fcs1 0xF1000A0000338000 0x0008 aix-vios2 vfchost6 0x01 0x0000
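When there are many adapters, the kdb output can be condensed into a client-to-VIOS mapping with awk. A small sketch, run here against a saved copy of the sample lines above (on a live system you would feed it `echo vfcs | kdb` output):

```shell
# Saved sample of the 'echo vfcs | kdb' data lines shown above
kdb_out='fcs0 0xF1000A000033A000 0x0008 aix-vios1 vfchost8 0x01 0x0000
fcs1 0xF1000A0000338000 0x0008 aix-vios2 vfchost6 0x01 0x0000'

# Field 1 = client adapter, field 4 = VIOS host, field 5 = vfchost on that VIOS
echo "$kdb_out" | awk '{printf "%s -> %s on %s\n", $1, $5, $4}'
# prints:
# fcs0 -> vfchost8 on aix-vios1
# fcs1 -> vfchost6 on aix-vios2
```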
NPIV creation and how they are related together:
FCS0: Physical FC Adapter installed on the VIOS
VFCHOST0: Virtual FC (Server) Adapter on VIOS
FCS0 (on client): Virtual FC adapter on VIO client
Creating NPIV adapters:
0. install physical FC adapters in the VIO Servers
1. HMC -> VIO Server -> DLPAR -> Virtual Adapter (don't forget profile (save current))
2. HMC -> VIO Client -> DLPAR -> Virtual Adapter (the ids should be mapped, don't forget profile)
3. cfgdev (VIO server), cfgmgr (client) <--it will bring up the new adapter vfchostX on vio server, fcsX on client
4. check status:
lsdev -dev vfchost* <--lists virtual FC server adapters
lsmap -vadapter vfchost0 -npiv <--gives more detail about the specified virtual FC server adapter
lsdev -dev fcs* <--lists physical FC adapters
lsnports <--checks NPIV readiness (fabric=1 means npiv ready)
5. vfcmap -vadapter vfchost0 -fcp fcs0 <--mapping the virtual FC adapter to the VIO's physical FC
6. lsmap -all -npiv <--checks the mapping
7. HMC -> VIO Client -> get the WWPNs of the adapter <--if LPM will not be used, only the first WWPN is needed
8. SAN zoning
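The NPIV readiness check in step 4 can be scripted. A sketch, using a saved sample of `lsnports` output (the device names, locations, and counts are illustrative); the test is simply fabric=1 in the third column:

```shell
# Saved sample of 'lsnports' output (illustrative values); on a live VIOS
# you would pipe the real command: lsnports | awk 'NR>1 && $3==1 {print $1}'
lsnports_out='name  physloc                     fabric tports aports swwpns awwpns
fcs0  U78A0.001.DNWHZS4-P1-C6-T1       1     64     62   2048   2046
fcs1  U78A0.001.DNWHZS4-P1-C6-T2       0     64     64   2048   2048'

# fabric=1 means the port and the attached switch support NPIV
echo "$lsnports_out" | awk 'NR>1 && $3==1 {print $1, "is NPIV ready"}'
# prints: fcs0 is NPIV ready
```

Here fcs1 is skipped because fabric=0: either the port is not cabled to a switch, or the switch port does not have NPIV enabled.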
Replacement of a physical FC adapter with NPIV
1. identify the adapter
$ lsdev -dev fcs4 -child
name status description
fcnet4 Defined Fibre Channel Network Protocol Device
fscsi4 Available FC SCSI I/O Controller Protocol Device
2. unconfigure the mappings
$ rmdev -dev vfchost0 -ucfg
3. FC adapters and their child devices must be unconfigured or deleted
$ rmdev -dev fcs4 -recursive -ucfg
4. replace the adapter with diag: DIAGNOSTIC OPERATING INSTRUCTIONS -> Task Selection -> Hot Plug Task -> PCI Hot Plug Manager -> Replace/Remove a PCI Hot Plug Adapter