VIRTUAL SCSI
Virtual SCSI is based on a client/server relationship. The Virtual I/O Server owns the physical resources and acts as server or, in SCSI terms, target device. The client logical partitions access the virtual SCSI backing storage devices provided by the Virtual I/O Server as clients.
Virtual SCSI server adapters can be created only on the Virtual I/O Server. For HMC-managed systems, virtual SCSI adapters are created and assigned to logical partitions using partition profiles.
The vhost (virtual SCSI server) adapter behaves like a normal SCSI adapter: multiple disks can be assigned to it. Usually one virtual SCSI server adapter is mapped to one virtual SCSI client adapter, passing backing devices through to individual LPARs. It is also possible to map a virtual SCSI server adapter to multiple LPARs, which is useful for virtual optical and/or tape devices, allowing removable media to be shared between several client partitions.
on VIO server:
root@vios1: / # lsdev -Cc adapter
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
The client partition accesses its assigned storage through a virtual SCSI client adapter, which presents the disks, logical volumes or file-backed storage as virtual SCSI disk devices.
on VIO client:
root@aix21: / # lsdev -Cc adapter
vscsi0 Available Virtual SCSI Client Adapter
root@aix21: / # lscfg -vpl hdisk2
hdisk2 U9117.MMA.06B5641-V6-C13-T1-L890000000000 Virtual SCSI Disk Drive
In SCSI terms:
virtual SCSI server adapter: target
virtual SCSI client adapter: initiator
(Analogous to the client/server model, where the client is the initiator.)
Physical disks presented to the Virtual I/O Server can be exported and assigned to a client partition in a number of different ways:
- The entire disk is presented to the client partition.
- The disk is divided into several logical volumes, which can be presented to a single client or multiple different clients.
- With Virtual I/O Server 1.5 and later, files can be created on these disks and used as file-backed storage.
- With Virtual I/O Server 2.2 Fix Pack 24 Service Pack 1 and later, logical units can be created from a shared storage pool.
The IVM and HMC environments present the same storage management concepts under different names: the storage pool interface under IVM is essentially the LVM interface under HMC, and the terms are sometimes used interchangeably. So "volume group" can refer to both volume groups and storage pools, and "logical volume" can refer to both logical volumes and storage pool backing devices.
Once these virtual SCSI server/client adapter connections have been set up, one or more backing devices (whole disks, logical volumes or files) can be presented through the same virtual SCSI adapter.
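For illustration, an lsmap output on the VIOS might look like this when two backing devices are mapped through the same vhost adapter (all names below are hypothetical):
lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U8204.E8A.0680EC2-V1-C20                     0x00000003
VTD                   vtscsi0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk34
VTD                   vtscsi1
Status                Available
LUN                   0x8200000000000000
Backing device        testlv_client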
When using Live Partition Mobility, the storage needs to be assigned to the Virtual I/O Servers on the target server as well.
----------------------------
Number of LUNs attached to a VSCSI adapter:
VSCSI adapters have a fixed queue depth that varies depending on how many VSCSI LUNs are configured for the adapter. There are 512 command elements of which 2 are used by the adapter, 3 are reserved for each VSCSI LUN for error recovery and the rest are used for IO requests. Thus, with the default queue_depth of 3 for VSCSI LUNs, that allows for up to 85 LUNs to use an adapter: (512 - 2) / (3 + 3) = 85.
So if we need higher queue depths for the devices, the number of LUNs per adapter is reduced. For example, if we want to use a queue_depth of 25, that allows 510/28 = 18 LUNs per adapter. We can configure multiple VSCSI adapters to handle many LUNs with high queue depths, each adapter requiring additional memory. You may have more than one VSCSI adapter on a VIOC connected to the same VIOS if you need more bandwidth.
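A quick way to work out the limit for a planned queue_depth (plain ksh arithmetic using the formula above):
# queue_depth=25; echo $(( (512 - 2) / (queue_depth + 3) )) <--prints 18, the maximum number of LUNs for this adapter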
Also, one should set the queue_depth attribute on the VIOC's hdisk to match that of the mapped hdisk's queue_depth on the VIOS.
Note that changing the queue_depth on an hdisk at the VIOS requires unmapping the disk from the VIOC and remapping it (as sketched below), or, as a simpler approach, changing the value in the ODM only (e.g. # chdev -l hdisk30 -a queue_depth=20 -P) and then rebooting the VIOS.
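A possible unmap/remap sequence on the VIOS (the VTD, hdisk and vhost names are hypothetical; check the current mapping first with lsmap -all):
rmvdev -vtd vclient_disk <--remove the virtual target device (only the mapping, the data is untouched)
chdev -dev hdisk30 -attr queue_depth=20 <--change the attribute while the disk is unmapped
mkvdev -vdev hdisk30 -vadapter vhost0 -dev vclient_disk <--recreate the mapping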
----------------------------
File Backed Virtual SCSI Devices
Virtual I/O Server (VIOS) version 1.5 introduced file-backed virtual SCSI devices. These virtual SCSI devices serve as disks or optical media devices for clients.
In the case of file-backed virtual disks, the client is presented with a file from the VIOS that it accesses as a SCSI disk. With file-backed virtual optical devices, you can store, install and back up media on the VIOS and make it available to clients.
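A minimal sketch of the virtual optical (media repository) workflow, assuming no repository exists yet (the pool, ISO and device names are hypothetical):
mkrep -sp rootvg -size 10G <--create the virtual media repository in a storage pool
mkvopt -name aix_dvd1 -file /home/padmin/aix_dvd1.iso -ro <--add an ISO image to the repository (read-only)
mkvdev -fbo -vadapter vhost0 <--create a file-backed optical device (vtoptX) on the vhost
loadopt -vtd vtopt0 -disk aix_dvd1 <--load the image into the virtual optical device
lsrep <--list the repository and its images
unloadopt -vtd vtopt0 <--unload the image when finished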
----------------------------
Check VSCSI adapter mapping on client:
root@bb_lpar: / # echo "cvai" | kdb | grep vscsi <--cvai is a kdb subcommand
read vscsi_scsi_ptrs OK, ptr = 0xF1000000C01A83C0
vscsi0 0x000007 0x0000000000 0x0 aix-vios1->vhost2 <--shows which vhost is used on which vio server for this client
vscsi1 0x000007 0x0000000000 0x0 aix-vios1->vhost1
vscsi2 0x000007 0x0000000000 0x0 aix-vios2->vhost2
Checking for a specific vscsi adapter (vscsi0):
root@bb_lpar: /root # echo "cvscsi\ncvai vscsi0"| kdb |grep -E "vhost|part_name"
priv_cap: 0x1 host_capability: 0x0 host_name: vhost2 host_location:
host part_number: 0x1 os_type: 0x3 host part_name: aix-vios1
----------------------------
Another way to find the VSCSI and vhost adapter mapping:
If the whole disk is assigned to a VIO client, then the PVID can be used to trace the connection between the VIO server and the VIO client.
1. root@bb_lpar: /root # lspv | grep hdisk0 <--check the PVID of the disk in question on the client
hdisk0 00080e82a84a5c2a rootvg
2. padmin@bb_vios1: /home/padmin # lspv | grep 5c2a <--check which disk has this pvid on vio server
hdiskpower21 00080e82a84a5c2a None
3. padmin@bb_vios1: /home/padmin # lsmap -all -field SVSA "Backing Device" VTD "Client Partition ID" Status -fmt ":" | grep hdiskpower21
vhost13:0x0000000c:hdiskpower21:pid12_vtd0:Available <--check vhost adapter of the given disk
----------------------------
Managing VSCSI devices (server-client mapping)
We need a server adapter on the VIO server (vhost) and a client adapter on the LPAR (vscsi), configured correctly (the server and client adapters paired together). After that, a disk, an LV or a virtual optical device can be assigned from the VIO server to the LPAR as a vscsi device.
There are 3 ways to create these adapters:
- HMC Enhanced GUI:
This is the easiest and recommended way. First choose the LPAR, then select Virtual Storage from the menu on the left. From there it is possible to add a Physical Volume, SSP Volume or LV, and during this operation all adapters and their pairing are created automatically by the HMC.
- changing LPAR/VIO profile on HMC GUI:
It is possible to add these adapters in the profile, but in this case the pairing must be done carefully: make sure to use the correct IDs on both the VIO and the LPAR side. After activating the profiles, the adapters should appear on both the VIO and the LPAR side.
- HMC CLI:
With HMC commands it is possible to create these virtual adapters, but again the pairing must be done carefully (making sure the correct IDs are used); see the sketch below.
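A possible HMC CLI (DLPAR) sketch; the managed system name, partition names and slot numbers are placeholders, and the profiles would need to be updated similarly with chsyscfg:
chhwres -m <managed system> -r virtualio --rsubtype scsi -o a -p <vios name> -s <server slot> -a "adapter_type=server,remote_lpar_name=<client lpar>,remote_slot_num=<client slot>" <--create the server adapter on the VIOS
chhwres -m <managed system> -r virtualio --rsubtype scsi -o a -p <client lpar> -s <client slot> -a "adapter_type=client,remote_lpar_name=<vios name>,remote_slot_num=<server slot>" <--create the matching client adapter on the LPAR
lshwres -m <managed system> -r virtualio --rsubtype scsi --level lpar <--verify the new adapters and their pairing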
After the VIO and LPAR pairing is done, a disk/LV can be assigned using VIO commands:
lsmap -all <--first check which vhost adapter will be needed in later commands
-using physical disks:
mkvdev -vdev hdisk34 -vadapter vhost0 -dev vclient_disk <--for easier identification useful to give a name with the -dev flag
rmvdev -vdev <backing dev.> <--back. dev can be checked with lsmap -all (here vclient_disk)
-using logical volumes:
mkvg -vg testvg_vios hdisk34 <--creating vg for lv
lsvg <--listing a vg
reducevg <vg> <disk> <--removing a disk from a vg (removes the vg when its last disk is removed)
mklv -lv testlv_client testvg_vios 10G <--creating the LV that will be mapped to the client
lsvg -lv <vg> <--lists lvs under a vg
rmlv <lv> <--removes an lv
mkvdev -vdev testlv_client -vadapter vhost0 -dev <any_name> <--for easier identification give a name with the -dev flag
(here the backing device is an LV (testlv_client))
rmvdev -vdev <back. dev.> <--removes an assignment to the client
-using logical volumes just with storage pool commands:
(vg=sp, lv=bd)
mksp <vgname> <disk> <--creating a vg (sp)
lssp <--listing storage pools (vgs)
chsp -add -sp <sp> PhysicalVolume <--adding a disk to the sp (vg)
chsp -rm -sp bb_sp hdisk2 <--removing hdisk2 from bb_sp (storage pool)
mkbdsp -bd <lv> -sp <vg> 10G <--creates an lv with given size in the sp
lssp -bd -sp <vg> <--lists lvs in the given vg (sp)
rmbdsp -bd <lv> -sp <vg> <--removes an lv from the given vg (sp)
mkvdev ... and rmvdev ... apply here as well
-using file backed storage pool
First a normal (LV) storage pool should be created with mkvg or mksp; after that:
mksp -fb <fb sp name> -sp <vg> -size 20G <--creates a file backed storage pool with given storage pool and size
(it will look like an LV, and a filesystem will be created automatically as well)
lssp <--it will show as FBPOOL
chsp -add -sp clientData -size 1G <--increase the size of the file storage pool (clientData) by 1G
mkbdsp -sp fb_testvg -bd fb_bb -vadapter vhost2 10G <--it will create a file backed device and assigns it to the given vhost
mkbdsp -sp fb_testvg -bd fb_bb1 -vadapter vhost2 -tn balazs 8G <--it will also specify a virt. target device name (-tn)
lssp -bd -sp fb_testvg <--lists the lvs (backing devices) of the given sp
rmbdsp -sp fb_testvg -bd fb_bb1 <--removes the given lv (bd) from the sp
rmsp <file sp name> <--removes the given file storage pool
Removing the vhost adapter together with all of its mappings:
rmdev -dev vhost1 -recursive
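Putting it together, a short end-to-end example with hypothetical names (VIOS commands run as padmin, then verification on the client):
mkvg -vg testvg_vios hdisk34 <--VIOS: create the volume group
mklv -lv testlv_client testvg_vios 10G <--VIOS: create the logical volume
mkvdev -vdev testlv_client -vadapter vhost0 -dev aix21_datalv <--VIOS: map the LV to the client's vhost adapter
lsmap -vadapter vhost0 <--VIOS: verify the new virtual target device
cfgmgr <--client: discover the new virtual SCSI disk
lspv <--client: the new hdisk should appear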
----------------------------
On client partitions, MPIO for virtual SCSI devices currently supports only failover mode (which means only one path is active at a time):
root@bb_lpar: / # lsattr -El hdisk0
PCM PCM/friend/vscsi Path Control Module False
algorithm fail_over Algorithm True
----------------------------
Multipathing with dual VIO config:
on VIO SERVER:
# lsdev -dev <hdisk_name> -attr <--checking disk attributes
# lsdev -dev <fscsi_name> -attr <--checking FC attributes
# chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm <--reboot is needed for these
fc_err_recov=fast_fail <--in case of a link event IO will fail immediately
dyntrk=yes <--allows the VIO server to tolerate cabling changes in the SAN
# chdev -dev hdisk3 -attr reserve_policy=no_reserve <--each disk must be set to no_reserve
reserve_policy=no_reserve <--if this is configured, both VIO servers can present the same disk to the client
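To verify that the settings took effect (hdisk3 and fscsi0 are just the example devices from above):
# lsdev -dev hdisk3 -attr reserve_policy <--should show no_reserve
# lsdev -dev fscsi0 -attr fc_err_recov <--should show fast_fail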
on VIO client:
# chdev -l vscsi0 -a vscsi_path_to=30 -a vscsi_err_recov=fast_fail -P <--the path timeout checks the health of the VIOS and detects if the VIO server adapter isn't responding
vscsi_path_to=30 <--by default it is disabled (0), each client adapter must be configured, minimum is 30
vscsi_err_recov=fast_fail <--failover will happen immediately rather than delayed
# chdev -l hdisk0 -a queue_depth=20 -P <--it must match the queue depth value used for the physical disk on the VIO Server
queue_depth <--it determines how many requests will be queued on the disk
# chdev -l hdisk0 -a hcheck_interval=60 -a hcheck_mode=nonactive -P <--health check updates the path state automatically
(otherwise a failed path must be re-enabled manually)
hcheck_interval=60 <--how often do hcheck, each disk must be configured (hcheck_interval=0 means it is disabled)
hcheck_mode=nonactive <--hcheck is performed on nonactive paths (paths with no active IO)
Never set the hcheck_interval lower than the read/write timeout value of the underlying physical disk on the Virtual I/O Server. Otherwise, an error detected by the Fibre Channel adapter causes new healthcheck requests to be sent before the running requests time out.
The minimum recommended value for the hcheck_interval attribute is 60 for both Virtual I/O and non Virtual I/O configurations.
In the event of adapter or path issues, setting the hcheck_interval too low can cause severe performance degradation or possibly cause I/O hangs.
It is best not to configure more than 4 to 8 paths per LUN (to avoid too much health check I/O), and to set the hcheck_interval to 60 in the client partition and on the Virtual I/O Server.
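A possible one-liner to apply the health check settings to every virtual SCSI disk on the client at once (assuming the virtual disks appear under the vscsi subclass; queue_depth is left out here because it must match the VIOS value per disk; -P makes the change effective after reboot):
# for d in $(lsdev -Cc disk -s vscsi -F name); do chdev -l $d -a hcheck_interval=60 -a hcheck_mode=nonactive -P; done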
----------------------------
TESTING PATH PRIORITIES:
By default all paths are defined with priority 1, meaning that traffic will go through the first path.
If you want to control which path is used, the path priority has to be updated.
The priority of the vscsi0 path remains at 1, so it is the primary path.
The priority of the vscsi1 path will be changed to 2, so it will be the lower-priority path.
PREPARATION ON CLIENT:
# lsattr -El hdisk1 | grep hcheck
hcheck_cmd test_unit_rdy <--hcheck is configured, so path should come back automatically from failed state
hcheck_interval 60
hcheck_mode nonactive
# chpath -l hdisk1 -p vscsi1 -a priority=2 <--I changed priority=2 on vscsi1 (by default both paths are priority=1)
# lspath -AHE -l hdisk1 -p vscsi0
priority 1 Priority True
# lspath -AHE -l hdisk1 -p vscsi1
priority 2 Priority True
So, configuration looks like this:
VIOS1 -> vscsi0 -> priority 1
VIOS2 -> vscsi1 -> priority 2
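If many disks need the same layout, a possible loop (assuming lspath supports the -F name format at this level; vscsi1 is the secondary path as above):
# for d in $(lspath -p vscsi1 -F name | sort -u); do chpath -l $d -p vscsi1 -a priority=2; done <--set priority=2 on the vscsi1 path of every disk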
TEST 1:
1. ON VIOS2: # lsmap -all <--checking disk mapping on VIOS2
VTD testdisk
Status Available
LUN 0x8200000000000000
Backing device hdiskpower1
...
2. ON VIOS2: # rmdev -dev testdisk <--removing disk mapping from VIOS2
3. ON CLIENT: # lspath
Enabled hdisk1 vscsi0
Failed hdisk1 vscsi1 <--it shows a failed path on vscsi1 (this path comes from VIOS2)
4. ON CLIENT: # errpt <--error report will show "PATH HAS FAILED"
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
DE3B8540 0324120813 P H hdisk1 PATH HAS FAILED
5. ON VIOS2: # mkvdev -vdev hdiskpower1 -vadapter vhost0 -dev testdisk <--configure back disk mapping from VIOS2
6. ON CLIENT: # lspath <--in 30 seconds path will come back automatically
Enabled hdisk1 vscsi0
Enabled hdisk1 vscsi1 <--because of hcheck, path came back automatically (no manual action was needed)
7. ON CLIENT: # errpt <--error report will show path has been recovered
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
F31FFAC3 0324121213 I H hdisk1 PATH HAS RECOVERED
TEST 2:
I did the same on VIOS1 (rmdev ... of the disk mapping), whose path has priority 1 (I/O goes there by default):
ON CLIENT: # lspath
Failed hdisk1 vscsi0
Enabled hdisk1 vscsi1
ON CLIENT: # errpt <--an additional disk operation error will be in errpt
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
DCB47997 0324121513 T H hdisk1 DISK OPERATION ERROR
DE3B8540 0324121513 P H hdisk1 PATH HAS FAILED
----------------------------
How to change a VSCSI adapter on client:
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi2 <--we want to change vscsi2 to vscsi1
On VIO client:
1. # rmpath -p vscsi2 -d <--remove paths from vscsi2 adapter
2. # rmdev -dl vscsi2 <--remove adapter
On VIO server:
3. # lsmap -all <--check assignment and vhost device
4. # rmdev -dev vhost0 -recursive <--remove assignment and vhost device
On HMC:
5. Remove the deleted adapter from the client (from the profile too)
6. Remove the deleted adapter from the VIOS (from the profile too)
7. Create the new adapter on the client (in the profile too) <--cfgmgr on the client
8. Create the new adapter on the VIOS (in the profile too) <--cfgdev on the VIO server
On VIO server:
9. # mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev rootvg_hdisk0 <--create new assignment
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1 <--vscsi1 is there (cfgmgr may be needed)
----------------------------
Assigning and moving a DVD-RAM drive between LPARs
1. lsdev -type optical <--check if the VIOS owns an optical device (you should see something like: cd0 Available SATA DVD-RAM Drive)
2. lsmap -all <--to see if cd0 is already mapped and which vhost to use for assignment (lsmap -all | grep cd0)
3. mkvdev -vdev cd0 -vadapter vhost0 <--it will create vtoptX as a virtual target device (check with lsmap -all )
4. cfgmgr (on client lpar) <--bring up the cd0 device on the client (before moving the cd0 device, rmdev the device on this client first)
5. rmdev -dev vtopt0 -recursive <--to move cd0 to another client, remove assignment from vhost0
6. mkvdev -vdev cd0 -vadapter vhost1 <--create new assignment to vhost1
7. cfgmgr (on other client lpar) <--bring up cd0 device on other client
(Because the VIO server adapter here is configured with the "Any client partition can connect" option, these adapter pairs are not suited for client disks.)
----------------------------
Comments:
Awesome explanation of this MPIO concept, easy to understand, thanks!
Hi,
What is the default maximum number of virtual adapters in VIO?
Hi,
Can multiple VTDs be mapped to a single client vhost adapter?
I don't know, but you can easily change it anytime, up to 65535.
Yes, this is the way if you want to give more disks to 1 LPAR (client)
Hi, I am new to AIX and want to learn about virtualization in AIX from scratch. Please suggest the basics that I need to be aware of.
Hi, basics can be found in this redbook: http://www.redbooks.ibm.com/abstracts/sg247940.html
Hi,
Is there a precise way of finding the exact disk supplied from a dual VIO server setup (EMC hdiskpower disk) to the client?
Commands On Client:
/home/root# lspv
hdisk0 00c20b45268903e5 rootvg active
hdisk1 00c20b452b9a74cc datavg active
/home/root# lscfg | grep hdisk
* hdisk1 U9119.FHA.0220B45-V23-C323-T1-L8200000000000000 Virtual SCSI Disk Drive
* hdisk0 U9119.FHA.0220B45-V23-C323-T1-L8100000000000000 Virtual SCSI Disk Drive
Can we know which LUN ID or hdiskpower disk supplied from the VIO server is presented as hdisk0 and hdisk1 on the client?
Regards
Amar
Hi,
Probably this will help:
on client:
# lscfg | grep hdisk
* hdisk9 U8204.E8A.0680EC2-V5-C4-T1-L8100000000000000 Virtual SCSI Disk Drive
* hdisk8 U8204.E8A.0680EC2-V5-C3-T1-L8100000000000000 Virtual SCSI Disk Drive
# lspath
Enabled hdisk8 vscsi0 <--we will check hdisk8 on vscsi0
Enabled hdisk8 vscsi1
# echo "cvai" | kdb | grep vscsi
...
vscsi0 0x000007 0x0000000000 0x0 xxx-vios1->vhost10 <--we can see on vios1 vhost10 is involved.
vscsi1 0x000007 0x0000000000 0x0 xxx-vios2->vhost10
go to vios1:
# lsmap -vadapter vhost10
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost10 U8204.E8A.0680EC2-V1-C28 0x00000005
VTD pid5_vtd0
Status Available
LUN 0x8100000000000000 <--you can compare this LUN id with "lscfg" output on the client
Backing device hdiskpower13 <--you can see also hdiskpower name
...
Regards,
Balazs
If you have the PVID and the hosting VIO server, then the following will also work:
lspv | grep pvid <== This should give the associated hdiskpower
# lspv | head
hdisk0 00f646a28798b204 rootvg active
# lspv | grep 00f646a28798b204
hdiskpower22 00f646a28798b204 None
Through one vhost we can map 80 devices (VTDs) to one client.
Is that correct, aix?
By default, when you create a VIO server, the Maximum Virtual Adapters value is set to only 20.
In the link below you can find more info:
Maximum Virtual Slots used up on your VIO server
Hi, sorry for the late reply.
Yes, that is correct for a usual configuration. Some more info from the PowerVM redbook:
"There are 512 command elements, of which 2 are used by the adapter, 3 are reserved for each VSCSI LUN for error recovery, and the rest are used for I/O requests. Thus, with the default queue depth of 3 for VSCSI LUNs, that allows for up to 85 LUNs to use an adapter: (512 - 2) / (3 + 3) = 85 rounding down. If you need higher queue depths for the devices, the number of LUNs per adapter is reduced. For example, if you want to use a queue depth of 25, that allows 510/28 = 18 LUNs per adapter for an AIX client partition."
Hi,
Usage of "kdb" command is advisable? I came to know through few colleagues it will hung the server.
Regards,
Siva
Hi, I have used the "kdb" command several times, and it never hung for me.
Hi Admin,
Checking the VIOS and AIX client VSCSI adapter mapping, I used the below command:
echo "cvai" | kdb | grep vscsi -----> the command is not working on client LPARs with AIX 5.3 TL 5, 8, 11, 12, but it executes on AIX 6.1.
Thanks
AR
OK, thanks for the update.
If hdisk0 is failing on the server (under VIO) with location "U9117.MMA.06B5641-V6-C13-T1-L890000000000 Virtual SCSI Disk Drive", is there a way we can check the physical location code of the failing disk? If yes, what would be the exact command?
Thanks
Pradeep
Hi, first keep in mind the last part of that location, in this case "...L890000000000". Then go to the VIO server, and as padmin run "lsmap -all". Identify your server (client partition ID) and there you will see the above number with the hdisk name and physical location.
While allocating a physical PV through vSCSI for MPIO, I forgot to assign the PVID on both VIO servers. Now the client is installed with AIX and running fine with no PVID on the VIO servers. Is there any chance of an outage during failover? Will failover happen or not?
I checked the PowerVM documentation and it does not say that a PVID is necessary for correct function. I would say that if it is working OK, then it should be fine. If you can see both paths on the client, then it is working correctly. The PVID is mainly needed for LVM, but here you don't have any VGs or LVs on the disk. "The AIX LVM uses this number (PVID) to identify specific disks. When a volume group is created, the member devices are simply a list of PVIDs."
As you said, I can see both paths on the client side.
I have not assigned a PVID on either VIO server and mapped the disk to the client (virtual SCSI for MPIO) without any problem. The client can see the disk, a PVID is assigned on it, and both paths are visible too. I have installed AIX on it.
My question is: when MPIO failover happens at the VIO server side during an outage, will the VIO server be able to find the proper matching PVs, since there is no PVID on them?
I think it will work, but to be sure, open an IBM call and they will tell you for sure.
hi,
Great blog indeed, it has helped me a lot.
What is the maximum number of vscsi disks that can be assigned to an LPAR? What would happen if we assign more than this maximum number?
Thanks!
AIX Guy
It depends on the vscsi adapter settings (queue depth): "There are 512 command elements, of which 2 are used by the adapter, 3 are reserved for each VSCSI LUN for error recovery, and the rest are used for I/O requests. Thus, with the default queue depth of 3 for VSCSI LUNs, that allows for up to 85 LUNs to use an adapter: (512 - 2) / (3 + 3) = 85 rounding down. If you need higher queue depths for the devices, the number of LUNs per adapter is reduced. For example, if you want to use a queue depth of 25, that allows 510/28 = 18 LUNs per adapter for an AIX client partition." More info can be found in the IBM PowerVM Redbooks.
Hi,
Can we map logical volumes and a CD-ROM from the server to a client in this way: "mkvdev -vdev lv0 -vadapter vhost0" and then the CD-ROM as "mkvdev -vdev cd0 -vadapter vhost0"?
i.e. assign multiple LVs and cd0 (CD-ROM) to only one vhost0.
Hi, yes it is possible. Afterwards "lsmap -all" will show what type of device is mapped to it.
Hi,
Thanks for replying. We have created an LPAR and assigned an LV-based virtual disk to the client. When the client is booted for installation of AIX 5.3, the CD-ROM gets detected and the installation starts, but it gets to the following:
AIX Version 5.3
Starting NODE#000 physical CPU#001 as logical CPU#001...done
The HMC reports reference code 0518 and the installation gets stuck... any idea why this could be happening?
We have mapped the LV and the CD-ROM to a single vhost0 adapter; if the CD-ROM gets detected, why is the disk not getting detected?
Hi, you can try to map cd0 to a different vhost adapter... to see if that is the problem. If you get the same error, then the problem is somewhere else.
Hi,
Thanks once again for replying. We found the solution: we changed the OS installation disk. This time we tried installing 6.1 instead of 5.3 and it worked well; it only took a lot more time since we had assigned only 0.10 CPU and 512 MB RAM.
But the 5.3 disk was fine, it worked well on other systems. I think if we had waited a few more hours on the reference code 0518 screen, eventually it would have started the installation.
Thanks Balazs for such a wonderful blog. Really appreciate your work.
Hi Balazs, how can we log in to an LPAR without the HMC?
Thanks in advance!
Pratt
Hi Admin,
Can you please help me with the command to find the VIO server of an NPIV client?
Thanks
echo "vfcs" | kdb
One of the best blogs...
Do we use the same cabling for MPIO as for a normal disk adapter?
Is there any command available to find the VIO server of an SEA from the client?
Hi, on the VIO client I added a PV, set the PVID, and changed attributes like queue_depth and reserve_policy. Finally I increased the existing filesystem. Will it work?
In AIX, what is the command to find the LUN ID if it is HP storage?
On VIO servers, how to find the LUN if it is HP storage?
Yes, it will work, but you need to change the upperbound value and, if required, run extendlv.
Use the following method to find the vhost on the VIO server from the client on AIX 5.3:
1. The kdb utility can be invoked by just executing 'kdb' at the AIX command prompt.
# kdb
You will receive a prompt like this:
0>
2. Load the cvscsi autoload function - this function is already loaded by default on AIX 6.1 systems, hence we do not need to load the "cvscsi" function on AIX 6.1 servers (please refer to the kdb example mentioned above in this post).
0> cvscsi
Below will be the output:
read vscsi_scsi_ptrs OK, ptr = 0x3ABCD10
Autoload function /usr/lib/ras/autoload/cvscsi64.kdb was
successfully executed
3. Then we can check the vscsi adapter details, using the cvai function:
0> cvai
If you want it only for a particular vscsi, you can use: cvai vscsix
Below will be the output:
unit_id: 0x30000001 partition_num: 0x1 partition_name:
lparname(your partition name)
capability_level: 0x0 location_code:
priv_cap: 0x1 host_capability: 0x0 host_name: vhostx
host_location:
heart_beat_enabled: 0x1 sample_time: 0x1F
ping_response_time: 0x2D
host part_number: 0x2 os_type: 0x3 host part_name: VIONAME
......
The output will be more verbose, so in the above output we just grep for vscsi to get only the required details.
The required information is in the field 'host part_name' for the hostname of the VIO server serving this adapter and 'host_name' for the associated vhost adapter.
Thanks a lot! Very good info!
I thought when you run cfgdev, the PV is already added to the ODM. What is the point of the PVID for mapping the device into VSCSI? I would be curious to know what IBM has to say about this.
For a few servers I am getting the below error:
Unable to find
Enter the vscsi_scsi_ptrs address (in hex): invalid expression
Hi, I want to change the state of the vhost adapters on the VIOS to Defined for a PowerPath upgrade; please let me know how to proceed.