
NPIV (Virtual Fibre Channel Adapter)

With NPIV, you can configure the managed system so that multiple logical partitions can access independent physical storage through the same physical fibre channel adapter. (NPIV stands for N_Port ID Virtualization. N_Port ID is a storage term, short for node port ID, identifying ports on the node (FC adapter) in the SAN.)
To access physical storage in a typical storage area network (SAN) that uses fibre channel, the physical storage is mapped to logical units (LUNs) and the LUNs are mapped to the ports of physical fibre channel adapters. Each physical port on each physical fibre channel adapter is identified using one worldwide port name (WWPN).

NPIV is a standard technology for fibre channel networks that enables you to connect multiple logical partitions to one physical port of a physical fibre channel adapter. Each logical partition is identified by a unique WWPN, which means that you can connect each logical partition to independent physical storage on a SAN.

To enable NPIV on the managed system, you must create a Virtual I/O Server logical partition (version 2.1, or later) that provides virtual resources to client logical partitions. You assign the physical fibre channel adapters (that support NPIV) to the Virtual I/O Server logical partition. Then, you connect virtual fibre channel adapters on the client logical partitions to virtual fibre channel adapters on the Virtual I/O Server logical partition. A virtual fibre channel adapter is a virtual adapter that provides client logical partitions with a fibre channel connection to a storage area network through the Virtual I/O Server logical partition. The Virtual I/O Server logical partition provides the connection between the virtual fibre channel adapters on the Virtual I/O Server logical partition and the physical fibre channel adapters on the managed system.

The following listing shows the adapters on a managed system configured to use NPIV:

on VIO server:
root@vios1: / # lsdev -Cc adapter
fcs0      Available 01-00 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs1      Available 01-01 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
vfchost0  Available       Virtual FC Server Adapter
vfchost1  Available       Virtual FC Server Adapter
vfchost2  Available       Virtual FC Server Adapter
vfchost3  Available       Virtual FC Server Adapter
vfchost4  Available       Virtual FC Server Adapter

on VIO client:
root@aix21: /root # lsdev -Cc adapter
fcs0 Available C6-T1 Virtual Fibre Channel Client Adapter
fcs1 Available C7-T1 Virtual Fibre Channel Client Adapter

Two unique WWPNs (worldwide port names) starting with the letter "c" are generated by the HMC for the VFC client adapter. The pair is critical: both must be zoned if Live Partition Mobility (LPM) is planned. The virtual I/O client partition uses one WWPN to log in to the SAN at any given time; the other WWPN is used when the client logical partition is moved to another managed system with LPM.
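The two WWPNs of one VFC client adapter are typically allocated as consecutive hex values (compare the c0507603a42102de / c0507603a42102df pair in the chhwres example further down this page). As a rough sketch (not an HMC guarantee, so always verify the real pair on the HMC), the partner WWPN can be derived from the first one like this; carry-over past the last 8 hex digits is ignored here:

```shell
# Derive the partner WWPN of a VFC client adapter, assuming the HMC
# allocated the pair as consecutive hex values (typical, but verify
# on the HMC -- this is a sketch, not a guarantee).
wwpn=c0507603a42102de                      # first WWPN of the pair
prefix=${wwpn%????????}                    # first 8 hex digits
suffix=${wwpn#????????}                    # last 8 hex digits
partner="$prefix$(printf '%08x' $(( 0x$suffix + 1 )))"
echo "$partner"                            # -> c0507603a42102df
```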

lscfg -vpl fcsX will show only the first WWPN
fcstat fcsX will show only the active WWPN

Both commands display a single WWPN, but fcstat always shows the active WWPN currently in use (which changes after an LPM), whereas lscfg shows, as a static value, only the first WWPN assigned to the HBA.

There can be one VFC client adapter per physical port per client partition, and a maximum of 64 active VFC client adapters per physical port. There is always a one-to-one relationship between the virtual fibre channel client adapter and the virtual fibre channel server adapter.

The difference between traditional redundancy with SCSI adapters and the NPIV technology using virtual fibre channel adapters is that the redundancy occurs on the client, because only the client recognizes the disk. The Virtual I/O Server is essentially just a pass-through, managing the data transfer through the POWER Hypervisor. With Live Partition Mobility, storage moves to the target server without requiring reassignment (unlike virtual SCSI), because the virtual fibre channel adapters have their own WWPNs that move with the client partition to the target server.

If you create a VFC client adapter with DLPAR and then create it again in the partition profile to make it persistent across restarts, a different pair of virtual WWPNs is generated. To prevent this undesired situation, which would require new SAN zoning and storage configuration, make sure to save any virtual fibre channel client adapter DLPAR changes into a new partition profile by selecting Configuration -> Save Current Configuration, and change the default partition profile to the new profile.


The NPIV client's num_cmd_elems attribute should not exceed the VIOS physical adapter's num_cmd_elems.
If you increase num_cmd_elems on the virtual FC (vFC) adapter, you should also increase the setting on the real FC adapter.
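On AIX the value on each side can be read with lsattr (e.g. lsattr -El fcs0 -a num_cmd_elems). A minimal sketch of the constraint itself; the numbers below are made-up example values, not recommendations:

```shell
# Sketch: verify the client vFC num_cmd_elems does not exceed the value
# on the VIOS physical adapter (both values would come from
# `lsattr -El fcsX -a num_cmd_elems` on the respective side).
check_cmd_elems() {
    client_val=$1
    vios_val=$2
    if [ "$client_val" -gt "$vios_val" ]; then
        echo "BAD: client ($client_val) exceeds VIOS ($vios_val)"
        return 1
    fi
    echo "OK: client ($client_val) <= VIOS ($vios_val)"
}

check_cmd_elems 200 500        # hypothetical values: prints OK
```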


Check NPIV adapter mapping on client:

root@bb_lpar: / # echo "vfcs" | kdb                                         <--vfcs is a kdb subcommand
fcs0      0xF1000A000033A000  0x0008  aix-vios1 vfchost8  0x01    0x0000    <--shows which vfchost is used on vio server for this client
fcs1      0xF1000A0000338000  0x0008  aix-vios2 vfchost6  0x01    0x0000
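The kdb output above can be turned into a short "client adapter -> vfchost on VIOS" report with awk; the sample lines are inlined below just for illustration (on a live AIX client you would pipe `echo "vfcs" | kdb` into awk instead):

```shell
# Sketch: extract "client fcs -> vfchost on VIOS" from vfcs kdb output
# (sample output inlined here; on AIX pipe `echo "vfcs" | kdb` instead).
printf '%s\n' \
  'fcs0      0xF1000A000033A000  0x0008  aix-vios1 vfchost8  0x01    0x0000' \
  'fcs1      0xF1000A0000338000  0x0008  aix-vios2 vfchost6  0x01    0x0000' |
awk '/^fcs/ { printf "%s -> %s on %s\n", $1, $5, $4 }'
```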


NPIV creation and how they are related together:

FCS0: Physical FC Adapter installed on the VIOS
VFCHOST0: Virtual FC (Server) Adapter on VIOS
FCS0 (on client): Virtual FC adapter on VIO client

Creating NPIV adapters:
0. install physical FC adapters in the VIO Servers
1. HMC -> VIO Server -> DLPAR -> Virtual Adapter (don't forget the profile (save current))
2. HMC -> VIO Client -> DLPAR -> Virtual Adapter (the adapter IDs must be mapped to each other, don't forget the profile)
3. cfgdev (VIO server), cfgmgr (client)    <--it will bring up the new adapter vfchostX on vio server, fcsX on client
4. check status:
    lsdev -dev vfchost*                    <--lists virtual FC server adapters
    lsmap -vadapter vfchost0 -npiv         <--gives more detail about the specified virtual FC server adapter
    lsdev -dev fcs*                        <--lists physical FC adapters
    lsnports                               <--checks NPIV readiness (fabric=1 means npiv ready)
5. vfcmap -vadapter vfchost0 -fcp fcs0      <--mapping the virtual FC adapter to the VIO's physical FC
6. lsmap -all -npiv                        <--checks the mapping
7. HMC -> VIO Client -> get the WWPN of the adapter    <--if LPM will not be used, only the first WWPN is needed
8. SAN zoning
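When several mappings are planned (step 5), it can help to dry-run them first: echo the vfcmap commands for review before executing anything on the VIOS. The vfchost/fcs pairs below are hypothetical:

```shell
# Dry run: print the vfcmap commands for a set of planned mappings
# (hypothetical vfchost/fcs pairs) instead of executing them.
for map in "vfchost0 fcs0" "vfchost1 fcs1"; do
    set -- $map                       # split "vfchost fcs" into $1 $2
    echo "vfcmap -vadapter $1 -fcp $2"
done
```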


Replacement of a physical FC adapter with NPIV

1. identify the adapter

$ lsdev -dev fcs4 -child
name             status      description
fcnet4           Defined     Fibre Channel Network Protocol Device
fscsi4           Available   FC SCSI I/O Controller Protocol Device

2. unconfigure the mappings

$ rmdev -dev vfchost0 -ucfg
vfchost0 Defined

3. FC adapters and their child devices must be unconfigured or deleted

$ rmdev -dev fcs4 -recursive -ucfg
fscsi4 Defined
fcnet4 Defined
fcs4 Defined

4. diagmenu
DIAGNOSTIC OPERATING INSTRUCTIONS -> Task Selection -> Hot Plug Task -> PCI Hot Plug Manager -> Replace/Remove a PCI Hot Plug Adapter.


Changing WWPN number:
There are 2 methods: changing dynamically (chhwres) or changing in the profile (chsyscfg). The two are similar, and both are done from the HMC CLI.

I. Changing dynamically:

1. get current adapter config:
# lshwres -r virtualio --rsubtype fc -m <man. sys.> --level lpar | grep <LPAR name>

2. remove adapter from client LPAR: rmdev -Rdl fcsX (if needed unmanage device prior from storage driver)

3. remove adapter dynamically from HMC (it can be done in GUI)

4. create new adapter with new WWPNS dynamically:
# chhwres -r virtualio -m <man. sys.> -o a -p aix_lpar_01 --rsubtype fc -a "adapter_type=client,remote_lpar_name=aix_vios1,remote_slot_num=123,\"wwpns=c0507603a42102de,c0507603a42102df\"" -s 8

5. cfgmgr on client LPAR will bring up adapter with new WWPNs.

6. save the actual config to the profile (so the next profile activation will not bring back the old WWPNs)

(VFC mapping removal was not needed in this case; if there are problems, try reconfiguring that on the VIOS side as well.)
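The trickiest part of step 4 is the quoting: the wwpns list must itself be quoted inside the -a attribute string, so the inner quotes are escaped. A sketch that only builds and prints the command (using the sample names and WWPNs from above; <man. sys.> stays a placeholder):

```shell
# Sketch: assemble the chhwres attribute string; note the escaped inner
# quotes around the wwpns list (values are the sample ones from above).
attrs="adapter_type=client,remote_lpar_name=aix_vios1,remote_slot_num=123,\"wwpns=c0507603a42102de,c0507603a42102df\""
echo "chhwres -r virtualio -m <man. sys.> -o a -p aix_lpar_01 --rsubtype fc -a $attrs -s 8"
```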


II. Changing in the profile:

Same as above, just some of the commands are different:

get current config:
# lssyscfg -r prof -m <man. sys.> --filter lpar_names=aix_vios1
aix_lpar01: default:"""6/client/1/aix_vios1/5/c0507604ac560004,c0507604ac560005/1"",""7/client/1/aix_vios1/4/c0507604ac560018,c0507604ac560019/1"",""8/client2/aix_vios2/5/c0507604ac56001a,c0507604ac56001b/1"",""9/client/2/aix_vios2/4/c0507604ac56001c,c0507604ac56001d/1"""

create new adapters in the profile:
chsyscfg -m <man. sys.> -r prof  -i 'name=default,lpar_id=5,"virtual_fc_adapters+=""7/client/1/aix-vios1/4/c0507604ac560006,c0507604ac560007/1"""'

-m             - managed system
-r prof        - profile will be changed
-i '           - attributes
name=default   - name of the profile, which will be changed
lpar_id=5      - id of the client LPAR
7              - adapter id on client (slot id)
client         - adapter type
1              - remote LPAR id (VIOS server LPAR id)
aix_vios1      - remote LPAR name (VIOS server name)
4              - remote slot number (adapter id on VIOS server)
WWPN           - both WWPN numbers (separated with , )
1              - required or desired (1- required, 0- desired)
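Putting the fields above together, the virtual_fc_adapters value can be assembled in shell before pasting it into chsyscfg; a sketch with the sample values from the example above (the doubled inner quotes are what chsyscfg expects):

```shell
# Sketch: build the virtual_fc_adapters attribute from its fields
# (sample values from the explanation above).
slot=7; type=client; vios_id=1; vios_name=aix-vios1; vios_slot=4
wwpns="c0507604ac560006,c0507604ac560007"; required=1
vfc="\"virtual_fc_adapters+=\"\"$slot/$type/$vios_id/$vios_name/$vios_slot/$wwpns/$required\"\"\""
echo "chsyscfg -m <man. sys.> -r prof -i 'name=default,lpar_id=5,$vfc'"
```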

Here VFC unmapping was needed:
vfcmap -vadapter vfchost4 -fcp        <--remove mapping
vfcmap -vadapter vfchost4 -fcp fcs2        <--create new mapping



  1. hi,

    Is there any way to find out the WWPN of the physical HBA behind an NPIV adapter from the AIX client?


    1. I'm not aware of any commands, which will show WWPNs of separate systems in 1 shot.
      I would log in to the VIO server and I would check there.

      If you find some good solution, you can share with me :)

    2. Try:
      echo "vfcs" | kdb


  2. On the HMC:
    lssyscfg -r sys -F name |
    while read M; do lshwres -r virtualio --rsubtype fc --level lpar -m $M -F lpar_name,wwpns|
    sed 's/^/'$M,'/'
    done

    1. I'll check that... it sounds good, but let me be very-very-very precise, the original question was:"...from AIX client."

      But honestly, I really appreciate your solution, thx !!! :)

    2. hscroot@localhost:~> lssyscfg -r sys -F name |while read M;
      > do
      > lshwres -r virtualio --rsubtype fc --level lpar -m $M -F lpar_name,wwpns|sed 's/^/'$M,'/'
      > done
      ServerG-9117-MMA-SN066FAE4,No results were found.
      ServerA-9117-MMA-SN0670A54,No results were found.
      ServerB-9117-MMA-SN066FE74,No results were found.
      ServerD-9117-MMA-SN066FF94,No results were found.
      ServerC-9117-MMA-SN066FEC4,No results were found.

  3. hi,

    the above command from the HMC will list the WWPNs of the virtual fibre cards on the LPARs only.

    I am interested in finding the WWPNs of the physical HBAs allocated in the VIO, and the physical-to-virtual HBA relation for each LPAR.


    1. Hi,
      As far as I know, WWPNs of physical HBAs are not stored on the HMC, so you have to log in to the VIO server to find these out.

  4. The vfcmap command only allows you to map one vfchost to an fcs fibre port. After you have done the mapping from a certain vfchost to a certain fcsX, does it still allow you to map another vfchost to the same fcsX?

    1. Yes, you can map more vfchosts to the same fcsX. In this way you can virtualize one physical adapter (fcsX), and several LPARs can use it with these mappings.

    2. Thanks so much. I cannot really find any documentation that directly talks about this. (My server has not arrived yet. ) Will you be able to point me to some IBM doc that talks about this? THANKS.

    3. IBM Redbooks can be very useful. For example:
      IBM PowerVM Virtualization Introduction and Configuration: http://www.redbooks.ibm.com/abstracts/sg247940.html
      IBM PowerVM Virtualization Managing and Monitoring: http://www.redbooks.ibm.com/abstracts/sg247590.html

  5. Hi,

    What are the advantages and disadvantages of using NPIV over vscsi?


    1. Hi, with vscsi all the SAN storage is assigned to the VIO server. The necessary storage driver is installed on the VIO server only; the clients do not need any additional driver. Maintaining this storage driver is very simple, because it is installed only on the VIO server, but the VIO server can have hundreds of disks, which makes administration on the VIO server very complex.

      With NPIV, disks are assigned directly to the VIO clients; the VIO server does not know anything about these disks. The storage driver has to be installed on every client, so if you need to update the storage driver, this must be done on every client.

      I cannot tell you directly which one is better; both of them are good enough. One point which could be important: if you want to use the capabilities of your storage driver on your clients (load balancing or other special settings), then you can achieve this with NPIV.


  6. Hi ,

    I've four fcs on my both VIO servers which are as below.
    One of my Client LPAR's vfchost is mapped to fcs2 on the both VIO servers.
    I want to change the mapping on my 2nd VIO to use fcs3 instead of fcs2.
    I've used the below command to do the changes.
    #vfcmap -vadapter vfchost3 fcs
    #vfcmap -vadapter vfchost3 fcs fcs3

    This command works okay on the VIO, but when I log in to the client LPAR I can see that the fscsi1 path has failed for all hdisks coming from this VIO. I've tried to remove fscsi1 and ran cfgmgr; the fscsi is detected on the client LPAR, but the path does not come up in Enabled state.

    Can anybody help me with that ?

    1. Hi,

      correct command to unmap a vfchost from any physical fibre channel:
      vfcmap -vadapter vfchost3 -fcp

      Then correct command to map virtual fibre channel (vfchost3) to physical fibre channel(fcs3):
      vfcmap -vadapter vfchost3 -fcp fcs3

    2. Hi ,

      I've tried that too but it didn't help.
      Any other suggestion ?


    3. Hi,

      I did have the same problem with 16 NPIV-connected hosts, and it all came down to a communication device to the fabric, in my case sfwcomm7.
      So you have to delete all child devices on the adapter, in my case fcs4 (moved away from fcs3).
      Note! There can't be anything else running on this device, as we are going to delete it!

      First unmap and unconfigure all of vfchost0..15 from fcs3
      for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
      do
      vfcmap -vadapter vfchost$i -fcp
      rmdev -dev vfchost$i -ucfg
      done

      Update configuration on fcs4 (which failed for me)
      rmdev -dev fcs4 -ucfg

      List child devices and delete them afterwards
      lsdev -dev fcs4 -child
      lsdev -dev fscsi4 -child
      rmdev -dev sfwcomm7
      rmdev -dev fscsi4
      rmdev -dev fcnet4

      Configure devices

      Map to fcs4
      for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
      do
      vfcmap -vadapter vfchost$i -fcp fcs4
      done

      This solved the NPIV access for me.

      /Michael Linde

  7. plz upload documents about LPM!!!!!!

    1. it is here: http://aix4admins.blogspot.hu/2013/04/live-partition-mobility-live-partition.html

  8. How to perform LPM of an NPIV client? Detailed steps would be much appreciated.

    1. please check "Steps needed for LPM" section here: http://aix4admins.blogspot.hu/2013/04/live-partition-mobility-live-partition.html

  9. Thanks for the link! But do we need to have the storage guys zone the WWPN at the destination side too? Please update.

    1. If you use NPIV, the virtual FC client adapter will have 2 WWPN numbers. (You can check them on the HMC GUI in the adapter properties.) The storage team should zone the disks for both of these WWPN numbers if you plan to use LPM. Zoning should be done for both numbers! (The 1st WWPN is for normal usage, the 2nd WWPN is used during LPM.)

  10. Hello,

    I have added 2 virtual fibre adapters using DLPAR, but I forgot to save the current profile. I have shut down my LPAR and activated the LPAR from the HMC, and lost my NPIV virtual fibre adapters. Is there any method to recover the old virtual HBAs? Is there any way to assign the old WWPNs to the adapter?

    1. I would get in contact with IBM support. (If you find out something, you can share with us.)

    2. You can attempt to re-assign the old WWPNs to the vfc client adapter on your LPAR profile from HMC using the following command:

      $ chsyscfg -r prof -m -i "name=,lpar_name=,\"virtual_fc_adapters=\"\"/client/////\"\""

    3. Apologies, the editor took away a part of the command before while formatting it internally (the parts inside angular brackets):

      $ chsyscfg -r prof -m [managed system] -i "name=[profile name],lpar_name=[lpar name],\"virtual_fc_adapters=\"\"[vfc_client_slot_num]/client/[vios_partition_id]/[vios_partition_name]/[vios_vfc_server_slot_num]/[Old_WWPNs]/[is_required_flag]\"\""

  11. Hi is there any way to find whether the fc adapter have nib capabule r not how do i know

    1. Hi, what is "nib capabule"?

    2. NIB is Network Interface Backup, which we normally use in the LPAR client to avoid a single point of failure (with NICs).

  12. It is possible to dynamically remap a vfchost adapter to another physical NPIV capable Fibre Channel adapter (fcs#) using vfcmap command. The ability to do so depends on the AIX client and VIO server levels. It requires
    Virtual I/O Server 2.1.2 or higher
    AIX 5.3 TL 11 or higher (feature was introduced with IZ51404)
    IZ33540 (5.3 TL 11)
    IZ51404 (5.3 TL 12)
    IZ33541 (6.1 TL 4)
    IZ51405 (6.1 TL 5)

    To verify if the APAR is installed, run
    # instfix -ik IZ#####
    In the following example, we are remapping vfchost0 from fcs0 to fcs1 (while the client is up and running):
    To unmap, run vfcmap -vadapter vfchost0 -fcp
    To remap, run vfcmap -vadapter vfchost0 -fcp fcs1

    1. Hi, thanks a lot, good description!


  13. is there any way to migrate vscsi to fscsi ?
    I mean virtual SCSI to NPIV ?

    I know both are different tech.

    1. Yes: configure NPIV with additional LUNs, then mirror to the new disks and afterwards remove the old disks.

  14. how to find which VIO server serves the FC adapter for a client, from the client level

    1. Try the following:
      # echo "vfcs" | kdb

  15. the following command is not working in an NPIV environment, can you help me with that: mkvdev -fbo -vadapter vhost0, which here would be
    mkvdev -fbo -vadapter vfchost0
    but it is not working. Do you have any idea? I am trying to create a virtual optical file-backed device for a VFC host.

  16. Hello Good day!

    Do you think having multipathing software installed at the NPIV VIO end will be useful in managing FCS? If yes, please explain why.

  17. How do we see which pair of NPIV WWPNs an LPAR is using?

    1. Hi, on HMC, check adapter properties.

  19. What is the drawback of NPIV? My guess is that we need to rezone all disks again after replacing a failed card. Isn't it?

  20. Hi

    I have a faulty FC adapter due to a lot of IO errors on some LPARs. I replaced the adapter (by just removing the adapter and putting in a new one from stock) and noticed the same WWPNs allocated to all LPARs, and since my LPARs boot from LUNs, all booted properly.

    My question is: should the WWPN change after an HBA card replacement? Or, since the HMC generates it through the hypervisor, should nothing change?

    1. It should not change. Just unmap the fcs and vfchost on the VIO server

      On Client - unmanage fcs0 from the multipath driver (if any, like PowerPath, SDDPCM, HDLM)
      rmdev -l fcs0

      On VIO server - vfcmap -vadapter vfchost0 -fcp # unmap fcs to vfchost
      rmdev -l fcs4 # puts the adapter to defined state
      diag > Task Selection > Hot Plug Task > PCI Hot Plug Manager > Replace/Remove a PCI Hot Plug Adapter
      vfcmap -vadapter vfchost0 -fcp fcs4

      On Client - cfgmgr
      put it to be managed by multipath software (if any)

      Hope this helps :)

  21. Hi

    I need to replace an FC adapter with NPIV config for 10 LPARs. The question is: the physical WWPN changes, but does the virtual WWPN change? Do I have to update zoning, or does the replacement procedure not change my NPIV configuration?

    Thanks for your comments

  22. What is the maximum number of LUNs that can be assigned to an NPIV VIO client?
    I believe there is a vFC limitation of 256 LUNs, but could anyone confirm it?