With NPIV, you can configure the managed system so that multiple logical partitions can access independent physical storage through the same physical fibre channel adapter. (NPIV stands for N_Port ID Virtualization; an N_Port ID is a storage term for the node port ID that identifies a port on a node (FC adapter) in the SAN.)
To access physical storage in a typical storage area network (SAN) that uses fibre channel, the physical storage is mapped to logical units (LUNs) and the LUNs are mapped to the ports of physical fibre channel adapters. Each physical port on each physical fibre channel adapter is identified using one worldwide port name (WWPN).
NPIV is a standard technology for fibre channel networks that enables you to connect multiple logical partitions to one physical port of a physical fibre channel adapter. Each logical partition is identified by a unique WWPN, which means that you can connect each logical partition to independent physical storage on a SAN.
To enable NPIV on the managed system, you must create a Virtual I/O Server logical partition (version 2.1, or later) that provides virtual resources to client logical partitions. You assign the physical fibre channel adapters (that support NPIV) to the Virtual I/O Server logical partition. Then, you connect virtual fibre channel adapters on the client logical partitions to virtual fibre channel adapters on the Virtual I/O Server logical partition. A virtual fibre channel adapter is a virtual adapter that provides client logical partitions with a fibre channel connection to a storage area network through the Virtual I/O Server logical partition. The Virtual I/O Server logical partition provides the connection between the virtual fibre channel adapters on the Virtual I/O Server logical partition and the physical fibre channel adapters on the managed system.
The following listings show a managed system configured to use NPIV:
on VIO server:
root@vios1: / # lsdev -Cc adapter
fcs0 Available 01-00 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs1 Available 01-01 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
vfchost0 Available Virtual FC Server Adapter
vfchost1 Available Virtual FC Server Adapter
vfchost2 Available Virtual FC Server Adapter
vfchost3 Available Virtual FC Server Adapter
vfchost4 Available Virtual FC Server Adapter
on VIO client:
root@aix21: /root # lsdev -Cc adapter
fcs0 Available C6-T1 Virtual Fibre Channel Client Adapter
fcs1 Available C7-T1 Virtual Fibre Channel Client Adapter
Two unique WWPNs (worldwide port names) starting with the letter "c" are generated by the HMC for the VFC client adapter. The pair is critical: both must be zoned if Live Partition Mobility will be used. The virtual I/O client partition uses only one WWPN to log in to the SAN at any given time; the other WWPN is used when the client logical partition is moved to another managed system using PowerVM Live Partition Mobility.
lscfg -vpl fcsX will show only the first WWPN
fcstat fcsX will show only the active WWPN
Both commands show only one WWPN: fcstat always shows the active WWPN that is currently in use (which changes after an LPM), while lscfg shows, as a static value, only the first WWPN assigned to the HBA.
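For example (a sketch; the adapter name and WWPN value are made up for illustration):
root@aix21: / # lscfg -vpl fcs0 | grep "Network Address"
Network Address.............C0507606D5C0000A <--always the 1st (static) WWPN
root@aix21: / # fcstat fcs0 | grep "World Wide Port Name"
World Wide Port Name: 0xC0507606D5C0000A <--the currently active WWPN (changes after LPM)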
Configure one VFC client adapter per physical port per client partition, with a maximum of 64 active VFC client adapters per physical port. There is always a one-to-one relationship between the virtual Fibre Channel client adapter and the virtual Fibre Channel server adapter.
The difference between traditional redundancy with SCSI adapters and NPIV with virtual Fibre Channel adapters is that the redundancy occurs on the client, because only the client recognizes the disk. The Virtual I/O Server is essentially just a pass-through, managing the data transfer through the POWER hypervisor. With Live Partition Mobility, the storage moves to the target server without requiring reassignment (unlike virtual SCSI), because the virtual Fibre Channel adapters have their own WWPNs that move with the client partition to the target server.
If a VFC client adapter is created with DLPAR and then added again by editing the partition profile (to make it persistent across restarts), a different pair of virtual WWPNs is generated. To prevent this undesired situation, which would require new SAN zoning and storage configuration, make sure to save any virtual Fibre Channel client adapter DLPAR changes into a new partition profile by selecting Configuration -> Save Current Configuration, and change the default partition profile to the new profile.
-----------------------------------------------------
NPIV clients' num_cmd_elems attribute should not exceed the VIOS adapter's num_cmd_elems.
If you increase num_cmd_elems on the virtual FC (vFC) adapter, then you should also increase the setting on the real FC adapter.
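A sketch of checking and raising these values (the numbers are examples only; -P/-perm makes the change effective at the next reboot or reconfiguration):
on client: lsattr -El fcs0 -a num_cmd_elems <--check current value
on client: chdev -l fcs0 -a num_cmd_elems=512 -P <--set new value (active after reboot)
on VIOS: chdev -dev fcs0 -attr num_cmd_elems=1024 -perm <--keep it at least as high as its clients need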
-----------------------------------------------------
Check NPIV adapter mapping on client:
root@bb_lpar: / # echo "vfcs" | kdb <--vfcs is a kdb subcommand
...
NAME ADDRESS STATE HOST HOST_ADAP OPENED NUM_ACTIVE
fcs0 0xF1000A000033A000 0x0008 aix-vios1 vfchost8 0x01 0x0000 <--shows which vfchost is used on vio server for this client
fcs1 0xF1000A0000338000 0x0008 aix-vios2 vfchost6 0x01 0x0000
-----------------------------------------------------
NPIV device creation and how the devices are related:
FCS0: Physical FC Adapter installed on the VIOS
VFCHOST0: Virtual FC (Server) Adapter on VIOS
FCS0 (on client): Virtual FC adapter on VIO client
Creating NPIV adapters:
0. install physical FC adapters in the VIO Servers
1. HMC -> VIO Server -> DLPAR -> Virtual Adapter (don't forget profile (save current))
2. HMC -> VIO Client -> DLPAR -> Virtual Adapter (the ids should be mapped, don't forget profile; an HMC CLI alternative is sketched after this list)
3. cfgdev (VIO server), cfgmgr (client) <--it will bring up the new adapter vfchostX on vio server, fcsX on client
4. check status:
lsdev -dev vfchost* <--lists virtual FC server adapters
lsmap -vadapter vfchost0 -npiv <--gives more detail about the specified virtual FC server adapter
lsdev -dev fcs* <--lists physical FC adapters
lsnports <--checks NPIV readiness (fabric=1 means npiv ready)
5. vfcmap -vadapter vfchost0 -fcp fcs0 <--mapping the virtual FC adapter to the VIO's physical FC
6. lsmap -all -npiv <--checks the mapping
7. HMC -> VIO Client -> get the WWPNs of the adapter <--if LPM will not be used, only the first WWPN is needed
8. SAN zoning
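Steps 1. and 2. can also be done from the HMC CLI with chhwres; a sketch with made-up LPAR names and slot numbers:
chhwres -r virtualio -m <man. sys.> -o a -p <vios name> --rsubtype fc -s 10 -a "adapter_type=server,remote_lpar_name=<client name>,remote_slot_num=5"
chhwres -r virtualio -m <man. sys.> -o a -p <client name> --rsubtype fc -s 5 -a "adapter_type=client,remote_lpar_name=<vios name>,remote_slot_num=10"
(Afterwards save the configuration to the profile on both partitions.)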
-----------------------------------------------------
Checking if VIOS FC Adapter supports NPIV:
On VIOS as padmin:
$ lsnports
name physloc fabric tports aports swwpns awwpns
fcs0 U78C0.001.DAJX633-P2-C2-T1 1 64 64 2048 2032
fcs1 U78C0.001.DAJX633-P2-C2-T2 1 64 64 2048 2032
fcs2 U78C0.001.DAJX634-P2-C2-T1 1 64 64 2048 2032
fcs3 U78C0.001.DAJX634-P2-C2-T2 1 64 64 2048 2032
value in the fabric column:
1 - the adapter and the SAN switch are NPIV ready
0 - the adapter or the SAN switch is not NPIV ready and the SAN switch configuration should be checked
-----------------------------------------------------
Getting WWPNs from HMC CLI:
lshwres -r virtualio --rsubtype fc --level lpar -m <Man. Sys.> -F lpar_name,wwpns --header --filter lpar_names=<lpar name>
lpar_name,wwpns
bb_lpar,"c05076066e590016,c05076066e590017"
bb_lpar,"c05076066e590014,c05076066e590015"
bb_lpar,"c05076066e590012,c05076066e590013"
bb_lpar,"c05076066e590010,c05076066e590011"
-----------------------------------------------------
Replacing a physical FC adapter used for NPIV on the VIOS:
1. identify the adapter
$ lsdev -dev fcs4 -child
name status description
fcnet4 Defined Fibre Channel Network Protocol Device
fscsi4 Available FC SCSI I/O Controller Protocol Device
2. unconfigure the mappings
$ rmdev -dev vfchost0 -ucfg
vfchost0 Defined
3. FC adapters and their child devices must be unconfigured or deleted
$ rmdev -dev fcs4 -recursive -ucfg
fscsi4 Defined
fcnet4 Defined
fcs4 Defined
4. diagmenu
DIAGNOSTIC OPERATING INSTRUCTIONS -> Task Selection -> Hot Plug Task -> PCI Hot Plug Manager -> Replace/Remove a PCI Hot Plug Adapter.
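5. after the physical replacement, bring the devices back and restore the mapping (a sketch, using the device names from the example above):
$ cfgdev <--configures the new adapter and its child devices
$ vfcmap -vadapter vfchost0 -fcp fcs4 <--recreate the mapping
$ lsmap -all -npiv <--verify the mapping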
-----------------------------------------------------
Changing WWPN number:
There are 2 methods: changing dynamically (chhwres) or changing in the profile (chsyscfg). Both are similar, and both are done from the HMC CLI.
I. Changing dynamically:
1. get current adapter config:
# lshwres -r virtualio --rsubtype fc -m <man. sys.> --level lpar | grep <LPAR name>
lpar_name=aix_lpar_01,lpar_id=14,slot_num=8,adapter_type=client,state=1,is_required=0,remote_lpar_id=1,remote_lpar_name=aix_vios1,remote_slot_num=123,"wwpns=c0507603a42102d8,c0507603a42102d9"
2. remove adapter from client LPAR: rmdev -Rdl fcsX (if needed, unmanage the device from the storage driver first)
3. remove adapter dynamically from HMC (it can be done in GUI)
4. create new adapter with new WWPNs dynamically:
# chhwres -r virtualio -m <man. sys.> -o a -p aix_lpar_01 --rsubtype fc -a "adapter_type=client,remote_lpar_name=aix_vios1,remote_slot_num=123,\"wwpns=c0507603a42102de,c0507603a42102df\"" -s 8
5. cfgmgr on client LPAR will bring up adapter with new WWPNs.
6. save the actual config to the profile (so the next profile activation will not bring back the old WWPNs; a CLI sketch follows below)
(VFC mapping removal was not needed in this case; if there are problems, try reconfiguring the mapping on the VIOS side as well)
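Step 6 can also be done from the HMC CLI; a sketch (the profile name is an example):
# mksyscfg -r prof -m <man. sys.> -o save -p aix_lpar_01 -n default --force <--overwrites profile "default" with the current configuration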
-----------------------------------------------------
II. Changing in the profile:
same as above, just some commands are different:
get current config:
# lssyscfg -r prof -m <man. sys.> --filter lpar_names=aix_lpar01
aix_lpar01: default:"""6/client/1/aix_vios1/5/c0507604ac560004,c0507604ac560005/1"",""7/client/1/aix_vios1/4/c0507604ac560018,c0507604ac560019/1"",""8/client/2/aix_vios2/5/c0507604ac56001a,c0507604ac56001b/1"",""9/client/2/aix_vios2/4/c0507604ac56001c,c0507604ac56001d/1"""
create new adapters in the profile:
chsyscfg -m <man. sys.> -r prof -i 'name=default,lpar_id=5,"virtual_fc_adapters+=""7/client/1/aix_vios1/4/c0507604ac560006,c0507604ac560007/1"""'
-m - managed system
-r prof - profile will be changed
-i ' - attributes
name=default - name of the profile, which will be changed
lpar_id=5 - id of the client LPAR
7 - adapter id on client (slot id)
client - adapter type
1 - remote LPAR id (VIOS server LPAR id)
aix_vios1 - remote LPAR name (VIOS server name)
4 - remote slot number (adapter id on VIOS server)
WWPN - both WWPN numbers (separated with , )
1 - required or desired (1- required, 0- desired)
In this case, VFC unmapping was needed:
vfcmap -vadapter vfchost4 -fcp <--remove mapping
vfcmap -vadapter vfchost4 -fcp fcs2 <--create new mapping
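The result can be checked (a sketch with the devices from above):
lsmap -vadapter vfchost4 -npiv <--the FC name field should now show fcs2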
-----------------------------------------------------
Virtual FC login to SAN:
When a new LPAR with a VFC adapter has been created, the VFC adapter has to log in to the SAN for the first time before LUNs can be seen (for example, to install AIX).
This can be done on the HMC (HMC V7 R7.3 and later) with the command chnportlogin.
chnportlogin allows you to allocate, log in, and zone WWPNs before the client partition is activated.
On HMC:
1. lsnportlogin -m <man. sys> --filter lpar_ids=4 <-- list status of VFC adapters (lpar_id should be given)
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=2,wwpn=c0507607d1150008,wwpn_status=0,logged_in=none,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=2,wwpn=c0507607d1150009,wwpn_status=0,logged_in=none,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=3,wwpn=c0507607d115000a,wwpn_status=0,logged_in=none,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=3,wwpn=c0507607d115000b,wwpn_status=0,logged_in=none,wwpn_status_reason=null
The WWPN status. Possible values are:
0 - WWPN is not activated
1 - WWPN is activated
2 - WWPN status is unknown
2. chnportlogin -m <man. sys> --id 4 -o login <-- activate WWPNs (VFC logs in to SAN)
3. lsnportlogin -m <man. sys> --filter lpar_ids=4 <-- list status (it should be wwpn_status=1)
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=2,wwpn=c0507607d1150008,wwpn_status=1,logged_in=vios,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=2,wwpn=c0507607d1150009,wwpn_status=1,logged_in=vios,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=3,wwpn=c0507607d115000a,wwpn_status=1,logged_in=vios,wwpn_status_reason=null
lpar_name=bb_lpar,lpar_id=4,profile_name=default,slot_num=3,wwpn=c0507607d115000b,wwpn_status=1,logged_in=vios,wwpn_status_reason=null
4. The storage team can do the LUN assignment; after they have finished, you can log out:
chnportlogin -m <man. sys> --id 4 -o logout
-----------------------------------------------------
IOINFO
If the HMC is below V7 R7.3, ioinfo can be used to make VFC adapters log in to the SAN.
ioinfo can also be used for debugging, or to check whether disks are available and which disk is the boot disk.
It can be reached from the firmware boot screen with option 8 (Open Firmware Prompt):
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
1 = SMS Menu 5 = Default Boot List
8 = Open Firmware Prompt 6 = Stored Boot List
Memory Keyboard Network SCSI Speaker ok
0 > ioinfo
!!! IOINFO: FOR IBM INTERNAL USE ONLY !!!
This tool gives you information about SCSI,IDE,SATA,SAS,and USB devices attached to the system
Select a tool from the following
1. SCSIINFO
2. IDEINFO
3. SATAINFO
4. SASINFO
5. USBINFO
6. FCINFO
7. VSCSIINFO
q - quit/exit
==> 6
Then choose the VFC client device from the list --> List Attached FC Devices (this will cause that VFC device to log in to the SAN)
After that on VIOS: lsmap -npiv ... will show LOGGED_IN
(to quit from "ioinfo", the command "reset-all" will reboot the LPAR)
-----------------------------------------------------
62 comments:
hi,
is there any way to find out WWPN of physical HBA for NPIV adapter from AIX client.
Regards,
Ritesh
I'm not aware of any command which would show WWPNs of separate systems in one shot.
I would log in to the VIO server and I would check there.
If you find some good solution, you can share with me :)
Balazs
On the HMC:
lssyscfg -r sys -F name |
while read M; do lshwres -r virtualio --rsubtype fc --level lpar -m $M -F lpar_name,wwpns|
sed 's/^/'$M,'/'
done
I'll check that... it sounds good, but let me be very-very-very precise, the original question was:"...from AIX client."
But honestly, I really appreciate your solution, thx !!! :)
hi,
the above command from the HMC will list the WWPNs of the virtual fibre cards on the LPARs only.
I am interested in finding the WWPNs of the physical HBAs allocated to the VIO, and the physical-to-virtual HBA relation for each LPAR.
Thanks
Hi,
As far as I know, WWPNs of physical HBAs are not stored on the HMC, so you have to log in to the VIO server to find these out.
The vfcmap command only allows you to map one vfchost to an fcs fibre port. After you have done the mapping from a certain vfchost to a certain fcsX, does it still allow you to map another vfchost to the same fcsX?
Yes, you can map more vfchosts to the same fcsX. In this way you can virtualize one physical adapter (fcsX), and several LPARs can use it with these mappings.
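For example (a sketch; the vfchost numbers are illustrative):
vfcmap -vadapter vfchost0 -fcp fcs0
vfcmap -vadapter vfchost1 -fcp fcs0
lsmap -all -npiv <--both vfchost0 and vfchost1 will show fcs0 as the backing physical port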
Thanks so much. I cannot really find any documentation that directly talks about this. (My server has not arrived yet. ) Will you be able to point me to some IBM doc that talks about this? THANKS.
IBM Redbooks can be very useful. For example:
IBM PowerVM Virtualization Introduction and Configuration: http://www.redbooks.ibm.com/abstracts/sg247940.html
IBM PowerVM Virtualization Managing and Monitoring: http://www.redbooks.ibm.com/abstracts/sg247590.html
Hi,
What are the advantages and disadvantages of using NPIV over vscsi?
Regards,
Siva
Hi, with vSCSI all the SAN storage is assigned to the VIO server. The necessary storage driver is installed on the VIO server only; the clients do not need any additional driver. Maintaining this storage driver is very simple, because it is installed only on the VIO server, but the VIO server could have hundreds of disks, which makes administration on the VIO server very complex.
With NPIV, disks are assigned directly to the VIO clients; the VIO server does not know anything about these disks. The storage driver has to be installed on every client, so if you need to update the storage driver, this must be done on every client.
I cannot tell you directly which one is better; both of them are good enough. One point that could be important: if you want to use the capabilities of your storage driver on your clients (load balancing or other special settings), then NPIV lets you achieve this.
Regards,
Balazs
Hi ,
I've four fcs on my both VIO servers which are as below.
fcs0
fcs1
fcs2
fcs3
One of my Client LPAR's vfchost is mapped to fcs2 on the both VIO servers.
I want to change the mapping on my 2nd VIO to use fcs3 instead of fcs2.
I've used the below command to do the changes.
#vfcmap -vadapter vfchost3 fcs
#vfcmap -vadapter vfchost3 fcs fcs3
This command works okay on the VIO, but when I log in to the client LPAR, I can see the fscsi1 path has failed for all hdisks which are coming from this VIO. I've tried to remove fscsi1 and ran cfgmgr; the fscsi is detected on the client LPAR but the path does not come up in Enabled state.
Can anybody help me with that ?
Hi,
correct command to unmap a vfchost from any physical fibre channel:
vfcmap -vadapter vfchost3 -fcp
Then correct command to map virtual fibre channel (vfchost3) to physical fibre channel(fcs3):
vfcmap -vadapter vfchost3 -fcp fcs3
Hi ,
I've tried that too but it didn't help.
Any other suggestion ?
Thanks,
plz upload documents about LPM!!!!!!
How to perform LPM of an NPIV client? Detailed steps would be much appreciated.
it is here: http://aix4admins.blogspot.hu/2013/04/live-partition-mobility-live-partition.html
please check "Steps needed for LPM" section here: http://aix4admins.blogspot.hu/2013/04/live-partition-mobility-live-partition.html
Thanks for the link!! But do we need to have the storage guys zone the WWPN at the destination side too? Please update.
If you use NPIV, the virtual FC client adapter will have 2 WWPN numbers. (You can check them on the HMC GUI in the adapter properties.) The storage team should zone disks for both WWPN numbers if you plan to use LPM. Zoning should be done for both numbers! (The 1st WWPN is for normal usage, the 2nd WWPN is used during LPM.)
Hello,
I have added 2 virtual Fibre cards using DLPAR, but I forgot to save the current profile. I have shut down my LPAR and activated the LPAR from the HMC, and lost my NPIV virtual fibre adapters. Is there any method to recover the old virtual HBAs? Is there any way to assign the old WWPNs to the adapters?
I would get in contact with IBM support. (If you find out something, you can share with us.)
Hi,
I did have the same problem with 16 NPIV-connected hosts, and it was all down to some communication device to the fabric, in my case sfwcomm7.
So you have to delete all child devices you have on the adapter, in my case fcs4 (moved away from fcs3).
Note! There can't be anything else running on this device, as we are going to delete it!
First unmap and update all vfchost0..15 from fcs3
for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
do
vfcmap -vadapter vfchost$i -fcp
rmdev -dev vfchost$i -ucfg
done
Update configuration on fcs4 (which failed for me)
rmdev -dev fcs4 -ucfg
List child devices and delete them afterwards
lsdev -dev fcs4 -child
lsdev -dev fscsi4 -child
rmdev -dev sfwcomm7
rmdev -dev fscsi4
rmdev -dev fcnet4
Configure devices
cfgdev
Map to fcs4
for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
do
vfcmap -vadapter vfchost$i -fcp fcs4
done
This solved the NPIV access for me.
/Michael Linde
Hi, is there any way to find whether the FC adapter is NIB capable or not? How do I know?
Hi, what is "NIB capable"?
You can attempt to re-assign the old WWPNs to the VFC client adapter in your LPAR profile from the HMC using the following command (the parts in square brackets are placeholders):
$ chsyscfg -r prof -m [managed system] -i "name=[profile name],lpar_name=[lpar name],\"virtual_fc_adapters=\"\"[vfc_client_slot_num]/client/[vios_partition_id]/[vios_partition_name]/[vios_vfc_server_slot_num]/[Old_WWPNs]/[is_required_flag]\"\""
It is possible to dynamically remap a vfchost adapter to another physical NPIV-capable Fibre Channel adapter (fcs#) using the vfcmap command. The ability to do so depends on the AIX client and VIO server levels. It requires:
Virtual I/O Server 2.1.2 or higher
AIX 5.3 TL 11 or higher (feature was introduced with IZ51404)
IZ33540 (5.3 TL 11)
IZ51404 (5.3 TL 12)
IZ33541 (6.1 TL 4)
IZ51405 (6.1 TL 5)
To verify if the APAR is installed, run:
# instfix -ik IZ#####
In the following example, we are remapping vfchost0 from fcs0 to fcs1 (while the client is up and running):
To unmap, run: vfcmap -vadapter vfchost0 -fcp
To remap, run: vfcmap -vadapter vfchost0 -fcp fcs1
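After the remap, it can be verified on both sides (a sketch with the device names from the example):
on VIOS: lsmap -vadapter vfchost0 -npiv <--FC name should show fcs1, status LOGGED_IN
on client: cfgmgr, then lspath <--paths should be back in Enabled state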
Hi, thanks a lot, good description!
-Balazs
hscroot@localhost:~> lssyscfg -r sys -F name |while read M;
> do
> lshwres -r virtualio --rsubtype fc --level lpar -m $M -F lpar_name,wwpns|sed 's/^/'$M,'/'
> done
ServerG-9117-MMA-SN066FAE4,No results were found.
ServerA-9117-MMA-SN0670A54,No results were found.
ServerB-9117-MMA-SN066FE74,No results were found.
ServerD-9117-MMA-SN066FF94,No results were found.
ServerC-9117-MMA-SN066FEC4,No results were found.
Is there any way to migrate vscsi to fscsi?
I mean virtual SCSI to NPIV?
I know both are different technologies.
Yes: configure NPIV with additional LUNs, then mirror to the new disks, and afterwards remove the old disks.
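For rootvg, a minimal sketch of that mirroring approach (the hdisk numbers are examples: hdisk0 is the old vSCSI disk, hdisk1 is the new NPIV LUN):
# extendvg rootvg hdisk1
# mirrorvg rootvg hdisk1
# bosboot -ad /dev/hdisk1
# bootlist -m normal hdisk1 hdisk0
# unmirrorvg rootvg hdisk0
# reducevg rootvg hdisk0
# rmdev -dl hdisk0
# bootlist -m normal hdisk1
(For non-rootvg data, migratepv can be used in a similar way.)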
How to find which VIO serves the FC adapter for a client, from the client level?
Try the following:
# echo "vfcs" | kdb
The following command is not working in an NPIV environment, can you help me with that: mkvdev -fbo -vadapter vhost0, which is
mkvdev -fbo -vadapter vfchost0
but it is not working. Do you have any idea? I am trying to create a virtual optical file-backed device for a VFC host.
Try:
echo "vfcs" | kdb
br//Roger
Hello Good day!
Do you think multipathing software installed at the NPIV VIO end would be useful in managing FCS? If yes, please update why.
How do we see which pair of NPIVs you are using for an LPAR ?
how to find 2nd wwpn number
Hi, on HMC, check adapter properties.
What is the drawback of NPIV? My guess says we need to rezone all disks again after replacing a failed card, isn't it?
NIB is Network Interface Backup, which we normally use in an LPAR client to avoid a single point of failure (with NICs).
Hi
I have a faulty FC adapter due to a lot of IO errors on some LPARs. I replaced the adapter (by just removing the adapter and putting a new one in from stock) and noticed the same WWPNs allocated to all LPARs, and since my LPARs boot from LUNs, all booted properly.
My question is: should the WWPN change after an HBA card replacement? Or, since the HMC generates it through the hypervisor, should nothing change?
Hi
I need to replace an FC adapter with NPIV config for 10 LPARs. The question is: the physical WWPN changes, but does the virtual WWPN change? Do I have to update zoning? Or does the replacement procedure not change my NPIV configuration?
Thanks for your comments
It should not change. Just unmap fsc and vfchost on VIO server
On Client - unmanage fcs0 from the multipath driver (if any, like PowerPath, SDDPCM, HDLM)
rmdev -l fcs0
On VIO server - vfcmap -vadapter vfchost0 -fcp # unmap fcs to vfchost
rmdev -l fcs4 # puts the adapter to defined state
diag > Task Selection > Hot Plug Task > PCI Hot Plug Manager > Replace/Remove a PCI Hot Plug Adapter
cfgmgr
vfcmap -vadapter vfchost0 -fcp fcs4
On Client - cfgmgr
put it to be managed by multipath software (if any)
Hope this helps :)
What is the maximum LUN number assigned to NPIV VIO Client?
I believe there is a VFC limitation of 256 LUNs, but could anyone confirm it?
Hi,
After a switch replacement, my Fabric A NPIV connections have been lost for hundreds of LPARs to the VIOS. Even if I remove the vfcmap and re-add it, the link still shows as NOT_LOGGED_IN in the VIOS lsmap -npiv output. How can I resolve it? It is a big issue with so many servers losing redundancy.
just a guess, on HMC you can try lsnportlogin/chnportlogin.
Is this going to change the WWPN ?
In dual NPIV, which WWPN number can we share with the SAN team? Can you explain?
Hi, can we change a path from vscsi to NPIV while lpar is running?
Hi,
I am getting the below error for a newly created NPIV on the HMC:
lsnportlogin -m Server-8286-42A-SN78XXX --filter lpar_ids=6
lpar_name=ZLIUX0023-Pre-Prod,lpar_id=6,profile_name=normal,slot_num=77,wwpn=c0507609e0c70036,wwpn_status=2,logged_in=unknown,wwpn_status_reason=Adapter is busy
lpar_name=ZLIUX0023-Pre-Prod,lpar_id=6,profile_name=normal,slot_num=77,wwpn=c0507609e0c70037,wwpn_status=2,logged_in=unknown,wwpn_status_reason=Adapter is busy
lpar_name=ZLIUX0023-Pre-Prod,lpar_id=6,profile_name=normal,slot_num=74,wwpn=c0507609e0c70030,wwpn_status=2,logged_in=unknown,wwpn_status_reason=The adapter is not in the correct state for this operation
We are getting the following status:
wwpn_status=2, logged_in=unknown
wwpn_status_reason=The adapter is not in the correct state for this operation, or adapter busy.
How can we resolve this error? I am configuring NPIV for the first time on my system.
How do we get the WWN of a disk which is in defined state?
Hi,
How to check this kind of error?
# errpt
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
95A6D9B9 0605163120 T S vfchost19 Virtual FC Host Adapter detected an erro
---------------------------------------------------------------------------
LABEL: VIOS_VFC_HOST
IDENTIFIER: 95A6D9B9
Date/Time: Fri Jun 5 16:41:53 PDT 2020
Sequence Number: 14958510
Machine Id: 00CE9EA74C00
Node Id: rstgvio4
Class: S
Type: TEMP
WPAR: Global
Resource Name: vfchost19
Description
Virtual FC Host Adapter detected an error
Probable Causes
Virtual FC Host Adapter Driver is an Undefined State
Failure Causes
Virtual FC Host Adapter Driver is an Undefined State
Recommended Actions
Remove Virtual FC Host Adapter Instance, then Configure the same instance
Detail Data
ERNUM
0000 00A9
ABSTRACT
npiv_admin failed
AREA
FC driver
BUILD INFO
BLD: 1609 06-10:30:05 k2016_36A0
LOCATION
Filename:npiv_mads.c Function:npiv_port_login_proc Line:2450
DATA
rc = 0xEEEE000085358017
-------------------------------------------------------------------------
I'm getting confused about the flow on how to check this. The said vfchost is available.
# lsdev -Cc adapter | grep vfchost19
vfchost19 Available Virtual FC Server Adapter
#
I'm not seeing any issue, but the said error keeps popping up.
Name Physloc ClntID ClntName ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost18 U9119.MME.84E9EA7-V4-C418 49 rqacbmr12 AIX
Status:LOGGED_IN
FC name:fcs4 FC loc code:U78CD.001.FZHC750-P1-C4-T1
Ports logged in:5
Flags:a
VFC client name:fcs1 VFC client DRC:U9119.MME.84E9EA7-V49-C418
Hi, errpt shows it is a TEMP error, so at that specific time there should have been some problem, but as you said, for you everything seems OK now. You could ask the storage guys if they saw something on the SAN side at that specific time, and if it is re-occurring, then closely monitoring the LPAR and VIO could maybe reveal some peak/hiccup on the CPU/memory/IO side.
This could be some code level issue at FC adapters also.
I am not able to remove a vfchost, please help:
Some error messages may contain invalid information
for the Virtual I/O Server environment.
Method error (/etc/methods/ucfgdevice):
0514-062 Cannot perform the requested function because the
specified device is busy.
Unable to remove Virtual Target Device(VTD) "%s".
Hi, I have recently restored an AIX server through mksysb. I have a dual-VIO setup, and this particular LPAR is getting 6 fcs adapters; the first 2 are in Defined state. What could be the possible reason and how would I correct it, because I want it like fcs0 1 2 3?
Hi team,
Is there any disadvantage in performance when running PowerVM LPARs with NPIV connected through direct attach to a FlashSystem (FS5035, for example)?
Hi! How do I determine what are the "Ports Logged In: #"? For background, 1 virtual FC is mapped to my LPAR each from VIOS1 and VIOS2. All along what I understand by that "Ports Logged In:" is the number of paths to the LPAR. Currently my LPAR states that there are 3 logged in ports, but my co-worker said it should only be 2. Now I am confused on what "Ports Logged In:" really mean. We're trying to figure out what "ports" are logged in. LPAR is configured with LPM and VMRM by the way. I was thinking that the 3rd port might be because of LPM. Please enlighten me regarding this concept. :(