VG

VOLUME GROUP
When you install a system, the first volume group (VG) is created; it is called rootvg. The rootvg volume group contains the base set of logical volumes required to start the system: paging space, the journal log, boot data, and dump storage, each on its own logical volume.
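For example, the default rootvg logical volumes can be listed like this (the LV names below are the usual AIX defaults, shown here only as an illustration; your system may differ):

# lsvg -l rootvg                      <--lists all logical volumes of rootvg
(typically hd5: boot, hd6: paging space, hd8: jfs2 log, hd4: /, hd2: /usr, hd9var: /var, hd3: /tmp, hd1: /home, hd10opt: /opt, lg_dumplv: dump device)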

A normal VG is limited to 32512 physical partitions (32 physical volumes, each with 1016 partitions).
You can change this with a factor, e.g. chvg -t 4 bbvg (factor 4 means: maximum 4064 partitions per PV instead of 1016, and maximum 8 disks instead of 32).


How do I know if my volume group is normal, big, or scalable?
Run the lsvg command on the volume group and look at the value of MAX PVs. The value is 32 for a normal, 128 for a big, and 1024 for a scalable volume group.
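A quick check (rootvg is only an example, use your own VG name):

# lsvg rootvg | grep "MAX PVs"        <--32: normal, 128: big, 1024: scalable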


If a physical volume is part of a volume group, it contains 2 additional reserved areas. One area contains both the VGSA and the VGDA, and it starts in the first 128 reserved sectors (blocks) of the disk. The other area is at the end of the disk and is reserved as a relocation pool for bad blocks.

VGDA (Volume Group Descriptor Area)
It is an area on the hard disk (PV) that contains information about the entire volume group. There is at least one VGDA per physical volume (one or two copies per disk). It contains the physical volume list (PVIDs), the logical volume list (LVIDs), and the physical partition map (which maps LPs to PPs).

# lqueryvg -tAp hdisk0                                <--look into the VGDA (-A:all info, -t: tagged, without it only numbers)
Max LVs:        256
PP Size:        27                                    <--exponent of 2: 2^27 bytes = 128 MB
Free PPs:       698
LV count:       11
PV count:       2
Total VGDAs:    3
Conc Allowed:   0
MAX PPs per PV  2032
MAX PVs:        16
Quorum (disk):  0
Quorum (dd):    0
Auto Varyon ?:  1
Conc Autovaryo  0
Varied on Conc  0
Logical:        00cebffe00004c000000010363f50ac5.1   hd5 1       <--1: count of mirror copies (00cebff...c5 is the VGID)
                00cebffe00004c000000010363f50ac5.2   hd6 1
                00cebffe00004c000000010363f50ac5.3   hd8 1
                ...
Physical:       00cebffe63f500ee                2   0            <--2: VGDA count, 0: state code (active, missing, removed)
                00cebffe63f50314                1   0            (the sum of the VGDA counts should equal Total VGDAs)
Total PPs:      1092
LTG size:       128
...
Max PPs:        32512
-----------------------

VGSA (Volume Group Status Area)
The VGSA contains state information about physical partitions and physical volumes. For example, the VGSA knows if one physical volume in a volume group is unavailable and the state of all physical partitions in the volume group.

Both the Volume Group Descriptor Area and the Volume Group Status Area have beginning and ending time stamps that are very important. These time stamps enable the LVM to identify the most recent copy of the VGDA and the VGSA at vary on time. The LVM requires that the time stamps for the chosen VGDA be the same as those for the chosen VGSA.


Quorum
Non-rootvg volume groups can be taken offline and brought online by a process called varying on and varying off a volume group. The system checks the availability of all VGDAs for a particular volume group to determine if a volume group is going to be varied on or off.
When attempting to vary on a volume group, the system checks for a quorum of the VGDA to be available. A quorum is equal to 51 percent or more of the VGDAs available. If it can find a quorum, the VG will be varied on; if not, it will not make the volume group available.
Turning off the quorum does not allow a varyonvg without a quorum; it only prevents the closing of an active VG when it loses its quorum. (So a forced varyon may be needed: varyonvg -f VGname.)

After turning it off (chvg -Qn VGname), the change takes effect immediately.
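A minimal sketch of the usual sequence (datavg is a placeholder name; note that for rootvg the quorum change only takes effect after a reboot, see the mirroring procedure below):

# chvg -Qn datavg                     <--disable quorum checking
# lsvg datavg | grep QUORUM           <--verify the setting
# varyonvg -f datavg                  <--forced varyon, only needed if the VG lost its quorum and cannot be activated normally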


LTG
LTG (Logical Track Group) size is the maximum transfer size of a volume group.
On AIX 5.3 and 6.1 the LTG size is set dynamically (it is calculated at each volume group activation).
The LTG size can be changed with: varyonvg -M<LTG size>
(chvg -L has no effect on volume groups created on 5.3 or later; it was used on 5.2.)
To display the LTG size of a disk: /usr/sbin/lquerypv -M <hdisk#>
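For example (hdisk0, datavg and the 512 KB value are only illustrative):

# /usr/sbin/lquerypv -M hdisk0        <--shows the maximum LTG size supported by the disk (in KB)
# varyonvg -M512 datavg               <--activates the VG with an LTG size of 512 KB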


lsvg                      lists the volume groups that are on the system
lsvg -o                   lists all volume groups that are varied on
lsvg -o | lsvg -ip        lists pvs of online vgs
lsvg rootvg               gives details about the vg (lsvg -L <vgname> does not wait for the lock release (useful during mirroring))
lsvg -l rootvg            info about all logical volumes that are part of a vg
lsvg -M rootvg            lists all PV, LV, PP details of a vg (PVname:PPnum LVname: LPnum :Copynum)
lsvg -p rootvg            display all physical volumes that are part of the volume group
lsvg -n <hdisk>           shows vg info, but it is read from the VGDA on the specified disk (useful for comparing it between different disks)

mkvg -s 2 -y testvg hdisk13    creates a volume group
    -s                    specify the physical partition size
    -y                    indicate the name of the new vg

chvg                      changes the characteristics of a volume group
chvg -u <vgname>          unlocks the volume group (if a command core dumped, or the system crashed and the vg was left in a locked state)
                          (Many LVM commands place a lock into the ODM to prevent other commands from working on the same VG at the same time.)
extendvg rootvg hdisk7    adds hdisk7 to rootvg (-f forces it: extendvg -f ...)
reducevg rootvg hdisk7    tries to delete hdisk7 (the vg must be varied on) (reducevg -f ... :force it)
                          (it will fail if the vg contains open logical volumes)
reducevg datavg <pvid>    reducevg can use the pvid as well (useful if the disk has already been removed from the ODM, but the VGDA still exists)


syncvg                    synchronizes stale physical partitions (varyonvg is often better, because it first reestablishes the reservation and then syncs in the background)
varyonvg rootvg           makes the vg available (-f force a vg to be online if it does not have the quorum of available disks)
                          (varyonvg acts as a self-repair program for VGDAs, it does a syncvg as well)
varyoffvg rootvg          deactivate a volume group
mirrorvg -S P01vg hdisk1  mirrors P01vg to hdisk1 (checking: lsvg P01vg | grep STALE) (-S: background sync)
                          (mirrorvg -m rootvg hdisk1 <-- -m makes an exact copy, the pp mapping will be identical; this is the advised way)
unmirrorvg testvg1 hdisk0 hdisk1 removes mirrors on the vg from the specified disks

exportvg avg              removes the VG's definition from the ODM and /etc/filesystems (for ODM problems an exportvg followed by importvg will fix it)
importvg -y avg hdisk8    makes the previously exported vg known to the system (hdisk8 is any disk belonging to the vg)

reorgvg                   rearranges physical partitions within the vg to conform with the placement policy (outer edge...) for the lv.
                          (For this at least 1 free pp is needed, and the relocatable flag of the lvs must be set to 'y': chlv -r ...)

getlvodm -j <hdisk>       get the vgid for the hdisk from the odm
getlvodm -t <vgid>        get the vg name for the vgid from the odm
getlvodm -v <vgname>      get the vgid for the vg name from the odm

getlvodm -p <hdisk>       get the pvid for the hdisk from the odm
getlvodm -g <pvid>        get the hdisk for the pvid from the odm
lqueryvg -tcAp <hdisk>    gets all the vgid and pvid information for the vg from the vgda (directly from the disk)
                          (you can compare the disk with the odm: getlvodm <-> lqueryvg, see the example below)
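A small sketch of such a comparison (hdisk2 is just an example disk):

# getlvodm -p hdisk2                     <--pvid of hdisk2 according to the ODM
# lqueryvg -tAp hdisk2 | grep Physical   <--pvids according to the VGDA on the disk
(if the two do not match, the ODM and the VGDA are out of sync; synclvodm/redefinevg below may help)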


synclvodm <vgname>        synchronizes or rebuilds the lvcb, the device configuration database, and the vgdas on the physical volumes
redefinevg                helps to regain the basic ODM information if it is corrupted (redefinevg -d hdisk0 rootvg)
readvgda hdisk40          shows details from the disk

Physical Volume states (and quorum):
lsvg -p VGName            <--shows pv states (not device states!)
    active                <--during varyonvg disk can be accessed
    missing               <--during varyonvg disk can not be accessed + quorum is available
                          (after disk repair varyonvg VGName will put in active state)
    removed               <--no disk access during varyonvg + quorum is not available --> you issue varyonvg -f VGName
                          (after force varyonvg in the above case, PV state will be removed, and it won't be used for quorum)
                          (to put back in active state, first we have to tell the system the failure is over:)
                          (chpv -va hdiskX, this defines the disk as active, and after that varyonvg will synchronize)
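A minimal recovery sketch for the 'removed' case above (VGName and hdiskX are placeholders):

# chpv -va hdiskX         <--declare the disk active again (tell the system the failure is over)
# varyonvg VGName         <--varyonvg will synchronize the stale partitions
# lsvg -p VGName          <--the pv state should be back to active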


---------------------------------------

Mirroring rootvg (i.e after disk replacement):
1. disk replaced -> cfgmgr           <--it will find the new disk (i.e. hdisk1)
2. extendvg rootvg hdisk1            <--sometimes extendvg -f rootvg...
(3. chvg -Qn rootvg)                 <--only if quorum setting has not yet been disabled, because this needs a restart
4. mirrorvg -s rootvg                <--add mirror for rootvg (-s: synchronization will not be done)
5. syncvg -v rootvg                  <--synchronize the new copy (lsvg rootvg | grep STALE)
6. bosboot -a                        <--we changed the system so create boot image (-a: create complete boot image and device)
                                     (hd5 is mirrored, so no need to do it for each disk, e.g. bosboot -ad hdisk0 is enough)
7. bootlist -m normal hdisk0 hdisk1  <--set normal bootlist
8. bootlist -m service hdisk0 hdisk1 <--set bootlist when we want to boot into service mode
(9. shutdown -Fr)                    <--this is needed if quorum has been disabled
10.bootinfo -b                       <--shows the disk  which was used for boot

---------------------------------------

Export/Import:
1. node A: umount all fs -> varyoffvg myvg
2. node A: exportvg myvg            <--ODM cleared
3. node B: importvg -y myvg hdisk3  <-- -y: vg name; if omitted, a new vg name will be generated (if needed varyonvg -> mount fs)
if fs already exists:
    1. umount the old one and mount the imported one with this: mount -o log=/dev/loglv01 -V jfs2 /dev/lv24 /home/michael
    (these details have to be added to the mount command, and they can be retrieved from the LVCB: getlvcb lv24 -At)

    2. vi /etc/filesystems, create a second stanza for the imported filesystems with a new mountpoint.
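A sketch of such a second stanza (the mount point /home/michael2 is hypothetical; the LV and log names are taken from the mount command above):

    /home/michael2:
            dev             = /dev/lv24
            vfs             = jfs2
            log             = /dev/loglv01
            mount           = false
            check           = false
            account         = false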

---------------------------------------

VG problems with ODM:

if varyoff is possible:
1. write down the VG's name, major number, and one of its disks
2. exportvg VGname
3. importvg -V MajorNum -y VGname hdiskX

if varyoff is not possible:
1. write down the VG's name, major number, and one of its disks
2. export the vg through the backdoor, using odmdelete (see the sketch below)
3. re-import the vg (may produce warnings, but works)
(it is not necessary to umount filesystems or stop processes)
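A hedged sketch of the odmdelete 'backdoor' export (VGname, hdiskX and MajorNum are placeholders; the object classes below are the commonly used ones, double-check them on your AIX level before running anything like this):

# odmdelete -q name=VGname -o CuAt        <--customized attributes of the vg
# odmdelete -q name=VGname -o CuDv        <--the vg device entry itself
# odmdelete -q value3=VGname -o CuDvDr    <--major/minor number entry
# odmdelete -q name=VGname -o CuDep       <--dependency info
# importvg -V MajorNum -y VGname hdiskX   <--re-import, same as in the varyoff case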

---------------------------------------

Changing factor value (chvg -t) of a VG:

A normal or a big vg has the following limitations after creation:
MAX PPs per VG = MAX PVs * MAX PPs per PV

                   Normal       Big
MAX PPs per VG:    32512        130048
MAX PPs per PV:    1016         1016
MAX PVs:           32           128

If we want to extend the vg with a disk which is so large that it would have more than 1016 PPs, we will receive:
root@bb_lpar: / # extendvg bbvg hdisk4
0516-1162 extendvg: Warning, The Physical Partition Size of 4 requires the
        creation of 1024 partitions for hdisk4.  The limitation for volume group
        bbvg is 1016 physical partitions per physical volume.  Use chvg command
        with -t option to attempt to change the maximum Physical Partitions per
        Physical volume for this volume group.

If we change the factor value of the VG, then extendvg will be possible:
root@bb_lpar: / # chvg -t 2 bbvg
0516-1164 chvg: Volume group bbvg changed.  With given characteristics bbvg
        can include up to 16 physical volumes with 2032 physical partitions each.

Calculation:
Normal VG: 32/factor = new value of MAX PVs
Big VG: 128/factor= new value of MAX PVs

-t   PPs per PV        MAX PV (Normal)    MAX PV (Big)
1    1016              32                 128
2    2032              16                 64
3    3048              10                 42
4    4064              8                  32
5    5080              6                  25
...

"chvg -t" can be used online either increasing or decreasing the value of the factor.

---------------------------------------

Changing Normal VG to Big VG:

If you have reached the MAX PV limit of a Normal VG and playing with the factor (chvg -t) is not possible anymore, you can convert it to a Big VG.
This is an online activity, but there must be free PPs on each physical volume, because the VGDA will be expanded on all disks:

root@bb_lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            511         2           02..00..00..00..00
hdisk3            active            511         23          00..00..00..00..23
hdisk4            active            1023        0           00..00..00..00..00

root@bb_lpar: / # chvg -B bbvg
0516-1214 chvg: Not enough free physical partitions exist on hdisk4 for the
        expansion of the volume group descriptor area.  Migrate/reorganize to free up
        2 partitions and run chvg again.

In this case we have to migrate 2 PPs from hdisk4 to hdisk3 (so 2 PPs will be freed up on hdisk4):

root@bb_lpar: / # lspv -M hdisk4
hdisk4:1        bblv:920
hdisk4:2        bblv:921

hdisk4:3        bblv:922
hdisk4:4        bblv:923
hdisk4:5        bblv:924
...

root@bb_lpar: / # lspv -M hdisk3
hdisk3:484      bblv:3040
hdisk3:485      bblv:3041
hdisk3:486      bblv:3042
hdisk3:487      bblv:1
hdisk3:488      bblv:2
hdisk3:489-511

root@bb_lpar: / # migratelp bblv/920 hdisk3/489
root@bb_lpar: / # migratelp bblv/921 hdisk3/490

root@bb_lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            511         2           02..00..00..00..00
hdisk3            active            511         21          00..00..00..00..21
hdisk4            active            1023        2           02..00..00..00..00

If we try changing to Big VG again, now it is successful:
root@bb_lpar: / # chvg -B bbvg
0516-1216 chvg: Physical partitions are being migrated for volume group
        descriptor area expansion.  Please wait.
0516-1164 chvg: Volume group bbvg2 changed.  With given characteristics bbvg2
        can include up to 128 physical volumes with 1016 physical partitions each.

If you check again, the freed-up PPs have been used:
root@bb_lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            509         0           00..00..00..00..00
hdisk3            active            509         17          00..00..00..00..17
hdisk4            active            1021        0           00..00..00..00..00

---------------------------------------

Changing Normal (or Big) VG to Scalable VG:

If you have reached the MAX PV limit of a Normal or a Big VG and playing with the factor (chvg -t) is not possible anymore, you can convert that VG to a Scalable VG. A Scalable VG allows a maximum of 1024 PVs and 4096 LVs, and a very big advantage is that the maximum number of PPs applies to the entire VG and is no longer defined on a per-disk basis.

!!!Converting to a Scalable VG is an offline activity (varyoffvg), and there must be free PPs on each physical volume, because the VGDA will be expanded on all disks.

root@bb_lpar: / # chvg -G bbvg
0516-1707 chvg: The volume group must be varied off during conversion to
        scalable volume group format.

root@bb_lpar: / # varyoffvg bbvg
root@bb_lpar: / # chvg -G bbvg
0516-1214 chvg: Not enough free physical partitions exist on hdisk2 for the
        expansion of the volume group descriptor area.  Migrate/reorganize to free up
        18 partitions and run chvg again.


After migrating some LPs to free up the required PPs (in this case 18), changing to Scalable VG is successful:
root@bb_lpar: / # chvg -G bbvg
0516-1224 chvg: WARNING, once this operation is completed, volume group bbvg
        cannot be imported into AIX 5.2 or lower versions. Continue (y/n) ?
...
0516-1712 chvg: Volume group bbvg changed.  bbvg can include up to 1024 physical volumes with 2097152 total physical partitions in the volume group.

---------------------------------------

0516-008 varyonvg: LVM system call returned an unknown error code (2).
solution: export LDR_CNTRL=MAXDATA=0x80000000@DSA (check in /etc/environment whether LDR_CNTRL has a value that is causing the trouble)

---------------------------------------

If VG cannot be created:
root@aix21c: / # mkvg -y tvg hdisk29
0516-1376 mkvg: Physical volume contains a VERITAS volume group.
0516-1397 mkvg: The physical volume hdisk29, will not be added to
the volume group.
0516-862 mkvg: Unable to create volume group.
root@aixc: / # chpv -C hdisk29        <--clears the owning volume manager from the disk, after this mkvg was successful

 ---------------------------------------

root@aix1: /root # importvg -L testvg -n hdiskpower12
0516-022 : Illegal parameter or structure value.
0516-780 importvg: Unable to import volume group from hdiskpower12.


For me the solution was:
there was no PVID on the disk; after adding one (chdev -l hdiskpower12 -a pv=yes) it was OK
---------------------------------------


reorgvg log files, and how it works:

reorgvg activity is logged in lvmcfg:
root@bb_lpar: / # alog -ot lvmcfg | tail -3
[S 17039512 6750244 10/23/11-12:39:05:781 reorgvg.sh 580] reorgvg bbvg bb1lv
[S 7405650 17039512 10/23/11-12:39:06:689 migfix.c 168] migfix /tmp/.workdir.9699494.17039512_1/freemap17039512 /tmp/.workdir.9699494.17039512_1/migrate17039512 /tmp/.workdir.9699494.17039512_1/lvm_moves17039512
[E 17039512 47:320 reorgvg.sh 23] reorgvg: exited with rc=0

Fields of these lines:
S - Start, E - End; PID, PPID; TIMESTAMP

The E (end) line shows how long reorgvg was running (in seconds:milliseconds):
47:320 = 47s 320ms


For a long-running reorgvg, you can check its status:

1. check the working dir of reorgvg
root@aixdb2: /root # alog -ot lvmcfg | tail -3 | grep workdir
[S 5226546 5288122 10/22/11-13:55:11:001 migfix.c 165] migfix /tmp/.workdir.4935912.5288122_1/freemap5288122 /tmp/.workdir.4935912.5288122_1/migrate5288122 /tmp/.workdir.4935912.5288122_1/lvm_moves5288122


2. check lvm_moves file in that dir (we will need the path of this file):
root@aixdb2: /root # ls -l /tmp/.workdir.4935912.5288122_1 | grep lvm_moves
-rw-------    1 root     system      1341300 Oct 22 13:55 lvm_moves5288122

(it contains all the lp migrations, and reorgvg goes through this file line by line)


3. check the process of reorgvg:
root@aixdb2: /root # ps -ef | grep reorgvg
    root 5288122 5013742   0 13:52:16  pts/2  0:12 /bin/ksh /usr/sbin/reorgvg P_NAVISvg p_datlv

root@aixdb2: /root # ps -fT 5288122
 CMD
 /bin/ksh /usr/sbin/reorgvg P_NAVISvg p_datlv
 |\--lmigratepp -g 00c0ad0200004c000000012ce4ad7285 -p 00c80ef201f81fa6 -n 1183 -P 00c0ad021d62f017 -N 1565
  \--awk -F: {print "-p "$1" -n "$2" -P "$3" -N "$4 }

(lmigratepp shows: -g VGid -p SourcePVid -n SourcePPnumber -P DestinationPVid -N DestinationPPnumber)

lmigratepp shows the PP which is being migrated at this moment
(if you check a few seconds later it will show the next PP being migrated; it uses the lvm_moves file)

4. check the line number of the PP which is being migrated at this moment:
(here the ps command from step 3 is extended with the content of the lvm_moves file)

root@aixdb2: /root # grep -n `ps -fT 5288122|grep migr|awk '{print $12":"$14}'` /tmp/.workdir.4935912.5288122_1/lvm_moves5288122
17943:00c0ad021d66f58b:205:00c0ad021d612cda:1259
17944:00c80ef24b619875:486:00c0ad021d66f58b:205

You can compare the above line numbers (17943, 17944) to the total number of lines in the lvm_moves file:
root@aixdb2: /root # cat /tmp/.workdir.4935912.5288122_1/lvm_moves5288122 | wc -l
   31536

It shows that out of 31536 lp migrations we are currently at number 17943.

---------------------------------------

0516-304 : Unable to find device id 00080e82dfb5a427 in the Device
        Configuration Database.


If a disk has somehow been deleted (rmdev) but was not removed from the vg:
root@bb_lpar: / # lsvg -p newvg
newvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            31          20          06..00..02..06..06
0516-304 : Unable to find device id 00080e82dfb5a427 in the Device
        Configuration Database.
00080e82dfb5a427  removed           31          31          07..06..06..06..06


VGDA still shows the missing disk is part of the vg:
root@bb_lpar: / # lqueryvg -tAp hdisk2
...
Physical:       00080e82dfab25bc                2   0
                00080e82dfb5a427                0   4

The VGDA (on hdisk2) should be updated, but this is only possible if the PVID is used with reducevg:
root@bb_lpar: / # reducevg newvg 00080e82dfb5a427

---------------------------------------

If you cannot access an hdiskpowerX disk, you may need to reset the reservation bit on it:

root@aix21: / # lqueryvg -tAp hdiskpower13
0516-024 lqueryvg: Unable to open physical volume.
        Either PV was not configured or could not be opened. Run diagnostics.

root@aix21: / # lsvg -p sapvg
PTAsapvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdiskpower12      active            811         0           00..00..00..00..00
hdiskpower13      missing           811         0           00..00..00..00..00
hdiskpower14      active            811         0           00..00..00..00..00

root@aix21: / # bootinfo -s hdiskpower13
0


Possible solution could be emcpowerreset:
root@aix21: / # /usr/lpp/EMC/Symmetrix/bin/emcpowerreset fscsi0 hdiskpower13

(after this, varyonvg will bring the disk back into the active state)

87 comments:

Anonymous said...

really good information

Anonymous said...

nice article

aix said...

:)

siva said...

Hi AIX,

What is the difference between the mirroring the VG and mirroring the LV

Siva said...

Hi AIX ,

Having the following doubts

1) Does the hardware RAID 1 in server level is better than the mirroring the 2 disks using mirrorvg

2) I have mirrored storage LUN which has been mapped to the AIX server, for data availability whether this more than enough otherwise want to add another storage LUN in server and want to mirror the VG?

Please advice

aix said...

Hi,
Mirroring a VG is basically mirroring all the LVs in it.
"The mirrorvg command takes all the logical volumes on a given volume group and mirrors those logical volumes. This same functionality may also be accomplished manually if you execute the mklvcopy command for each individual logical volume in a volume group."

Balazs

aix said...

Hi Siva,

As RAID 1 is basically "Mirroring" technically there should not be any difference. As 2 writes are required, it could be important, where it takes place, because it takes some resources. (At LUN level in the storage box or at AIX level). The other point is if the 2 disks are in the same storage box and something happens with that box, then your VG will be affected. My personal opinion, if you have a critical system then 2 disks from different storage boxes and mirroring it at AIX level could help you to avoid problems with a storage box. (It needs some additional resources at CPU/Disk level, but if it is a critical system it does not matter..)

Hope this helps,
Balazs

Siva said...

Hi Aix,

Thanks for immediate response, let me clearly explain

For the first question I got a partial answer; moreover, is there any default RAID controller in AIX p series servers to build a hardware RAID? And is mirroring done through a RAID controller comparatively better than mirroring the VG?

For the second question, LUN level means in storage level (RAID 1+0).

Regards
Siva

aix said...

Hi Siva,

for this you should check the system documentation of your model. (I never did comparative tests between RAID controller and VG level.)

Balazs

Manoj Suyal said...

Hi Balazs,

Nice article ! I just stared studying your articles and will surely read each of them .

I have one doubt related LVM.

why there are minimum three VGDA in a VG ? what is the location of LVCB? if a LV is spread around two or more PV then what would be LVCB location ?

Regards
Manoj Suyal

aix said...

Hi Manoj,

"why there are minimum three VGDA in a VG ?"
I don't know where you heard this, but it is not true! The VGDA contains the full info of the entire VG.
VG with 1 disk: Disk contains 2 copies of the VGDA (altogether 2 VGDAs)
VG with 2 disks: 1 disk contains 2 copies of the VGDA, the other disk contains 1 copy (altogether 3 VGDAs)
VG with 3 or more disks: each disk has 1 VGDA (altogether VGDA count is the same as the number of disks in VG)

The reason why there are 3 VGDAs in VGs with 2 disks is the quorum. Quorum needs 51% or more to activate a VG. If we lose the disk with 2 VGDAs, the VG will not be varied on; otherwise varyon is possible.

"what is the location of LVCB?"
The first 512 bytes of each logical volume in normal VGs. (In big VGs it moved partially into the VGDA, and for scalable VGs completely.)

"if a LV is spread around two or more PV then what would be LVCB location ?"
Because the first 512 bytes of each LV are the LVCB, I think the first LP contains this info. (I never tested it, it is a guess.)

Regards,
Balazs

Anonymous said...

Dear aix, excellent article! Just a question, are you from Argentina?
Thanks you very much

arreguez@yahoo.com

aix said...

Hi,

I'm from Hungary and I'm very happy to see people from Argentina are reading this blog :)

Regards,
Balazs

Anonymous said...

I'm preparing for 000-221 exam and these articles are helping me very much.

Thanks very much again.


Sergio

arreguez@yahoo.com

Unknown said...

HI AIx,

I want to know some questions about AIX which were recently asked in an interview.

1. can we rename root vg ?
2. can we do TL upgrade in AIX using Alt_Disk when rootvg is mirrored?
3. what is main def between 5.3 and 6.1.

pls post the answer at earliest. I m eager to know.

Amreesh

aix said...

Hi Amreesh,

1. I've never tried, but I think it is not possible. You cannot umount rootvg filesystems (/usr, /var) so you can't varyoff rootvg. (If you would umount /usr, you would not be able issue any commands.)

2. Yes, first you need to break the mirror and after that create alt_disk image on the disk which has been freed up.

3. AIX 6.1 Differences Guide is your book for this question. Some examples: enhanced DUMP facility, enhanced WPAR, filesystem encryption...

Balazs

Unknown said...

thanks balaze
thanks from india for you quick response.
keep it up

Unknown said...

Any other posts ??? good one

Anonymous said...

Hi Balazs,

Nice article !

I want to know some questions about AIX which were recently asked in an interview.

1. What are the Attibutes of LVM?
2. Describe about LVM Adva/Dis.Adv?
3. What is the Limitation of VG?

Thanks you

aix said...

Hi Basanth,

I think if you read these pages you will be able to answer those questions:
http://aix4admins.blogspot.hu/2011/05/lvm-logical-volume-manager-lvm-manages.html
http://aix4admins.blogspot.hu/2011/05/volume-group-when-you-install-system.html

Anonymous said...

thanks balazs.....

Anonymous said...

Do you have any information about Monitoring the System performance in AIX ......
if you have it just give the link....

basanth

aix said...

Hi, beside the usual AIX commands: topas, nmon, you can read about other tools here:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/Power%20Systems/page/Other%20Performance%20Tools

-Balazs

Anonymous said...

On AIX systems "primary_node" and "standby_node", I recently did the following:

In vg_dba, which consists of hdiskpower1 and hdiskpower4, on primary_node:
- use "extendvg" to add hdiskpower15 to vg_dba
- use "migratepv" to move contents of hdiskpower4 to hdiskpower15
- use "reducevg" to remove hdiskpower4 from vg_dba

This all worked, but since then there has been an inconsistency between
"primary_node" and "standby_node" because "standby_node" doesn't know about
the changes to "vg_dba".

* On "primary_node"
# lspv | grep vg_dba
hdiskpower15 00cbd242cfca60af vg_dba active
hdiskpower1 00cbd2427c71644d vg_dba active
* On "standby_node"
# lspv | grep vg_dba
hdisk8 00cbd2427c687ce7 vg_dba
hdisk38 00cbd2427c687ce7 vg_dba
hdiskpower1 00cbd2427c71644d vg_dba
"hdisk8" and "hdisk38" were in "hdiskpower4", which has been removed from
both nodes.

2 questions:
1. What must I do to correct the configuration on "standby_node"?
2. Is there any way to make the correction without taking the cluster down?

aix said...

I suppose you are using an HACMP cluster. In this case, you should always use "smitty hacmp".
If you do these in smitty then everything will be done automatically.

What you can do:
- make hdiskpower15 known by the standby node as well (cfgmgr, you should see hdiskpower15 there too)
- smitty hacmp -> C-SPOC -> HACMP Logical Volume Management -> Synchronize a Shared Volume Group Definition

If that does not work, generally you can do:
- on primary: varyonvg -bun vg_dba
- on standby: importvg -L vg_dba hdiskpower15
- on primary: varyonvg vg_dba

But be careful and look deeper into this (google more on this subject), as you may be missing something that you did not write to me.

Anonymous said...

hi balazs....

what is the first column in virtual memory in AIX...??

aix said...

Hi Basanth,

it depends on which command you use. You can find more info under Performance section -> Memory/vmstat/VMM

Vasanth G said...

Hi,

First, remember you are working in HACMP. Any changes (HDD replacement, FS, etc.) you are doing on the primary/home node have to be done through "C-SPOC" only... any changes done outside it won't be updated on the standby/secondary server.

C-SPOC automatically activates some RSCT daemons and does the changes/synchronizes with the standby/secondary server also.

Remember: changes should be done only through C-SPOC.

Thank you,
Vasanth G

Anonymous said...

Hi atom,,

An LV created on the VIO server is acting as rootvg on the client server. So if I want to increase the rootvg size on the client server, what are the steps I need to perform?

Regards,

BITTU

aix said...

Hi Bittu,
If your rootvg is mirrored on the client, then I would free up 1 disk first, increase the disk size, then mirror back. After that, do the same with the other disk.
I would do:
on client: unmirrorvg hdiskx, reducevg hdiskx
on vio server: extendlv 10G
on client: after 1 minute you will see disk size increased (getconf DISK_SIZE /dev/hdiskx)
on client: extendvg rootvg hdiskx, mirrorvg rootvg

Then with the other disk.

Anonymous said...

Hi Atom,

The scenario is this.....

LV is mirrored in VIO server.. and same is acting as rootvg in client(here not mirrored)..

As per your observations, once the LV is increased, the effect will take place automatically on the client server, am I right?

Regards,
Bittu

aix said...

Yes, automatically (about 20-30 seconds).

Regards,
Balazs

baski said...

Hi Balazs, hope u r doing good.. again, I've come to u with a problem ;)

# redefinevg -d hdiskpower17 appvg
0516-1791 redefinevg: lvm_rec for PV hdiskpower1 has VGID 0000000000000000 (00c629d500004c000000012bd673c48d expected)
#


I don't see any VGDA in the hdiskpower1. So in the PV state, the disk shows as MISSING.


# lsvg -p appvg
appvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdiskpower17 active 201 1 00..00..00..00..01
hdiskpower1 missing 336 55 00..00..00..00..55
#


There are no issues with the disk. All looks good in powermt and syminq commands. Is there a way to copy the vgda? Just to let you know, i don't have any disk free at the moment to add to this VG.

Thanks for considering.

Pravin!

Anonymous said...

Hi, can I know, if the LVs are different, is migratelp still possible?

aix said...

Hi,

Some ideas...I would do a "varyonvg ", because varyonvg acts as a self-repair program for VGDAs. If this doesn't help, then command "synclvodm -v " probably can do something:
"synclvodm: This would synchronize and rebuild the logical volume control block (LVCB), the ODM and the volume group descriptor area (VGDA)." (But please do some checks/research before issuing this command.)

Balazs

aix said...

Hi, what do you mean differ?
I guess it should work, because it works with Logical Partitions and Physical Partitions.

Anonymous said...

Hello, I have used the t factor already and I wanted to know whether my vg is normal or big or scalable... how can I find it?

aix said...

Hello, if you check a little above ("Changing factor value (chvg -t) of a VG") on this page, you will find a chart, with max PVs and t factor...

Unknown said...

Hi Balazs,

Had a few quick questions:

In AIX, let’s say you lost the OS disk and there is no backup copy available. You reinstalled OS on a replacement disk, but how will you find out the names of VGs on Data disks. Assume that you do not know what those names were beforehand?

In AIX, a Mirrored Volume Group is now degraded due to a failed disk. All logical partitions are now open/stale status.
What steps would you take to correct this issue.

Unknown said...

Hi,
How do I increase the VG size when the VG is mirrored?
For example, when the VG is in 2-way mirroring and we need to increase the size of the VG by 10GB, do I need to extend the VG using two 10GB disks, or if we add one 20GB disk directly, does it share 10GB with each mirrored copy?
Could you please explain?
I have an activity tomorrow, please help me to complete this successfully.

aix said...

Hi, you should check the mirror policy on the LVs in the VG. If it allows having the mirror on the same disk, you can use a 20GB disk, but if that is not allowed, you need 2x10GB disks.

Unknown said...

Hi AIX Spoc,

For the above problem, below is the mirror policy status. Can you please confirm whether I need to add two 10GB disks to increase the size of the VG, or whether one 20GB disk is enough?

MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes (superstrict)

Thanks for your support.

aix said...

Hi, "Each LP copy on separate pv:yes", means the mirror should be on different disk, so 2X10GB is needed.

Unknown said...

Thanks for your immediate response,


If the node is in a cluster, do I need to request another 2 x 10GB LUNs for the other cluster node as well?


Example: if it is a 2-node cluster and the cluster version is 6.1:

Do I need to have the same LUN ID on the two cluster nodes, or is that not required?

Do I need to do all the things like assigning a PVID to the newly configured LUNs, changing the properties of the PVs according to the old disks in the VG, extending the VG, and extending the filesystem?

Do I need to do all the above things on the other node as well, or will it take effect automatically?

If it takes effect automatically, up to which point do I need to perform these things?


Please help to clarify the above doubts.

Thanks to all.

aix said...

Hi, the same disks should be assigned to both nodes. (You have to send the WWPNs from both nodes to the SAN team so they assign the disks to both nodes.) After you see the new disks (cfgmgr on both nodes), give a PVID to the disks, and do every action in the "smit hacmp" menus. (In this case what you do will be done on both nodes.) I suggest you read some IBM Redbooks on this subject as well.

Anonymous said...

Hi,
We have a VIO cluster and it has storage from EMC VMAX.
The storage admin has shared the devices between these two cluster servers, but we are able to see one device on only one server.
On the other node it is missing; what could be the reason behind it?

Key points:
Only one device is missing, rest all devices are available in both the nodes.
From storage the device address is 1, so it is the first device.

aix said...

Hi, you could check the reserve policy setting of the disks on the cluster nodes; it should be set to no reserve.

Anonymous said...

hi,

how to move vg from one pv to another pv?

aix said...

Hi, please check command "migratepv", hope that helps.

Anonymous said...

Hi...

I have many VGs on my system. I have created them with the same command. I haven't specified the PP size while creating them. Logically, the PP size should be the same (default value) in all the VGs. However, I have different PP sizes: 32 MB, 256 MB, 64 MB and 128 MB. Can you please clarify this...

Anonymous said...

lsvg vg001
VOLUME GROUP: vg001 VG IDENTIFIER: 00303fcb00004c0000000110ebd71af2
VG STATE: active PP SIZE: 64 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 719 (46016 megabytes)
MAX LVs: 256 FREE PPs: 78 (4992 megabytes)
LVs: 2 USED PPs: 641 (41024 megabytes)
OPEN LVs: 2 QUORUM: 1 (Disabled)
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 1016 MAX PVs: 32
LTG size: 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
PV RESTRICTION: none
VLPAR


$ lsvg vg002
VOLUME GROUP: vg002 VG IDENTIFIER: 00303fcb00004c0000000110ebd74856
VG STATE: active PP SIZE: 32 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 319 (10208 megabytes)
MAX LVs: 256 FREE PPs: 23 (736 megabytes)
LVs: 2 USED PPs: 296 (9472 megabytes)
OPEN LVs: 2 QUORUM: 1 (Disabled)
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 1016 MAX PVs: 32
LTG size: 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
PV RESTRICTION: none
VLPAR

aix said...

Hi, this is written in "man mkvg":
"The default value for 32 and 128 PV volume groups will be the lowest value to remain within the limitation of 1016 physical partitions per PV. The default value for scalable volume groups will be the lowest value to accommodate 2040 physical partitions per PV."

Avinash said...

Hi Balazs,
Thanks for your wonderful blog, this has helped a lot.
Further, can you please let me know how we can convert a normal vg to a big vg for an enhanced concurrent volume group?
The volume group is used by HACMP/GPFS, could you please help me on this?
Thanks a ton..
Avinash

aix said...

Hi Avinash,
I have no experience about converting VGs with GPFS. I suggest do some tests on test system first...I do not have any systems with GPFS at the moment.

Regards,
Balazs

prasoon said...

Hello Balazs,

I have a query. I want to copy Oracle data installed and configured on an old box to a new box. The steps that I can think of are given below:
1- Mirrored the new SAN Lun's with pre-existing SAN Lun's at storage level for oracle mount points.
2- Once the mirroring complete then break the mirror among the Lun's at storage level.
3- Map and Scanned the mirrored Lun's on new box.
4- Mount the oracle file system with new Lun's.
5- Test DB connectivity etc.
Please suggest the commands if the above steps are correct or suggest the best possible way to copy without going for downtime of DB's on old box and not putting any load on network.

R'gs
Prasoon

Anonymous said...

Hi Balazs. I have an lv that I need to expand for a project, but no other pvs free to do it. As I see it there is some room left on the current pv, but don't know if it's possible to use remaining free PPs.

Here is some info:
[root]: /usr/eb2> lslv lv04
LOGICAL VOLUME: lv04 VOLUME GROUP: emcmfs
LV IDENTIFIER: 00001c2a00004c0000000104edf1b091.2 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPs: 512 PP SIZE: 8 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 500 PPs: 500
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 32
MOUNT POINT: /usr/eb2 LABEL: /usr/eb2
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO

[root]: /usr/eb2> lspv hdisk27
PHYSICAL VOLUME: hdisk27 VOLUME GROUP: emcmfs
PV IDENTIFIER: 000b225d1055374c VG IDENTIFIER 00001c2a00004c0000000104edf1b091
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 8 megabyte(s) LOGICAL VOLUMES: 2
TOTAL PPs: 539 (4312 megabytes) VG DESCRIPTORS: 2
FREE PPs: 38 (304 megabytes) HOT SPARE: no
USED PPs: 501 (4008 megabytes) MAX REQUEST: 256 kilobytes
FREE DISTRIBUTION: 00..00..00..00..38
USED DISTRIBUTION: 108..108..107..108..70

[root]: /usr/eb2> lsvg emcmfs
VOLUME GROUP: emcmfs VG IDENTIFIER: 00001c2a00004c0000000104edf1b091
VG STATE: active PP SIZE: 8 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 539 (4312 megabytes)
MAX LVs: 256 FREE PPs: 38 (304 megabytes)
LVs: 2 USED PPs: 501 (4008 megabytes)
OPEN LVs: 2 QUORUM: 2
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 1016 MAX PVs: 32
LTG size: 128 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable

[root]: /usr/eb2> lqueryvg -tAp hdisk27
Max LVs: 256
PP Size: 23
Free PPs: 38
LV count: 2
PV count: 1
Total VGDAs: 2
Conc Allowed: 0
MAX PPs per PV 1016
MAX PVs: 32
Conc Autovaryo 0
Varied on Conc 0
Logical: 00001c2a00004c0000000104edf1b091.1 loglv04 1
00001c2a00004c0000000104edf1b091.2 lv04 1
Physical: 000b225d1055374c 2 0
Total PPs: 539
LTG size: 128
HOT SPARE: 0
AUTO SYNC: 0
VG PERMISSION: 0
SNAPSHOT VG: 0
IS_PRIMARY VG: 0
PSNFSTPP: 4352
VARYON MODE: 0
VG Type: 0
Max PPs: 32512

In the end, /usr/eb2 must be 200 MB+

If I can use all available PPs that would be great, but I believe I need at least 12 PPs (96 MB) to reach my threshold.

Can I just do: extendlv lv04 12 (or 38)? Or do I have to do something with the mirroring first?

Many thanks in advance.

Unknown said...

hi,
I am reading your articles regularly; I have some questions.
1.What is mirror write consistency?
2.Why you need to synchronize after having LV mirrored. Detail?
3.How will you configure newly allocated LUN to your VG?
4.How LVM works with ODM?

aix said...

Hi,
1. please check this link: http://aix4admins.blogspot.hu/2011/05/lvm-logical-volume-manager-lvm-manages.html, at mirror write consistency section.
2. because mirroring just allocates the necessary Physical Partitions; the actual up-to-date information will be written by synchronization.
3. please read the manual of extendvg
4. All necessary info for LVM (VG name, VG ID, PV ID ... and its relations) are stored in ODM (and in VGDA as well). Depending on the command, it will refer to ODM for correct output.

Unknown said...

thank u for ur valuable reply sir.

Anonymous said...

HI Aix4admins,
how to do a 3rd copy of mirroring for any vg?
how to implement sshd?
how do you know what % of a migration is completed?
If a user tells you that he is not able to log in, what will you do?

Anonymous said...

hai sir can u please reply for these questions pls?

aix said...

Hi, please check "Rules for comments" section on the main page of this blog.

Sahasra said...

dear Sir,

how to unclone the VG

aix said...

Please be more specific, but if you mean alt_disk uncloning the command is: alt_rootvg_op -X

Unknown said...

Hi,

How to check the exported VGs from the server in AIX..

Thanks,

aix said...

Hi, read VGDA on the disk.

ra said...

hi Balazs,

if rootvg is mirrored then how can i increase the size of a file system created in /home directory?

do i need to unmirror rootvg .. increase the size first and then re-mirror it?

Thanks
Rahul

aix said...

Hi, no, just simply increase the size of the filesystem, mirrors will be handled automatically by AIX.

ra said...

so you mean if i increase the size of file system on rootvg in hdisk0 ...automatically it will be increased on hdisk1 rootvg......?

what if the size of filesystem is not increasing after running the command to increase fs on rootvg mirror?

thanks for all the help...

regards
Rahul

aix said...

1. yes
2. please don't complicate...please read some document/Redbooks about AIX LVM and mirroring.

ra said...

alright, will do that .. thanks for your timely help balazs .. great job ..

Rahul

Sandeep said...

How does a VG state change from Defined to Available or vice versa? Is it on reboot?
For Eg
# lsdev|grep "Volume group"
rootvg Defined Volume group
testvg Available Volume group

testvg is something I just created

Sandeep said...

I know it can be changed to defined state by changing "status = 1" to "status = 0" in CuDv by an odmchange

Unknown said...

Hi,
Can you please differentiate Concurrent VG and Enhanced concurrent VG? sorry if question is very basic!
Thanks

aix said...

Hi,

Both of them are needed for clusters (PowerHA). If you have a cluster where the application is running on both nodes (the resource group is online on both cluster servers), you need a Concurrent VG. If your resource group is online on only 1 server (the other is standby), you need an Enhanced Concurrent VG.

Hope this helps,
Balazs

Anonymous said...

Hi Team,

Please let me know the causes for unable to break the Rootvg

Unknown said...

I have a question: I have a LUN, hdisk26, and I need to assign it to AIX clustered servers.
Does anyone know how to do this?

Unknown said...

I have a vg which has 3 disks in it. I have to replace one hard disk; the vg size is 300 GB. Can I get a storage LUN of this size and migrate the data of this vg to the LUN?

SolAix said...

Hi Friends, I am Rajendra..
wants the total process to install vio server on ibm p flex 260 server, and diff between box server and flex server while installing vios server and client?

oyao aixblogspot.com said...

what is the max VG limit in a server

Manoj Suyal said...

Yes you can.

Anonymous said...

What is the difference between exportvg & reducevg,
and when should exportvg or reducevg be used?
E.g. if I want to delete my VG permanently (and also remove the disk permanently), which process do I need to follow?

1) unmount the fs
2) remove the fs
3) varyoffvg
4) do exportvg
5) delete the disk

or

1) unmount the fs
2) remove the fs
3) do reducevg -df hdiskX

Anonymous said...

How to mirror LVs in different Volume Groups

Ben said...

This is a wonderful info.. I am a new AIX admin and rootvg is consuming only 30G on 500G disk on VIOS server.
I wonder if I could allocate a hdisk prior to the VIOS creation so that I would only consume around 30G.
Or, move/resize the rootvg after the VIOS creation. I hate to waste 450G of space just for rootvg, especially when it's for VIOS.
Any help is greatly appreciated!!
Thanks!

aix said...

If rootvg on the VIOS is on local disks (most probably yes), then you cannot resize it. For other LPARs which have rootvg from SAN, you can replace it with a smaller one.

Unknown said...

Hi "unknown", in case you couldn't resize your VIOS because it had a local disk, remember that you can always use it to create a Media Repository (ISO library) for your LPARs, so can keep there the ISOs for AIX 7.2, Spectrum Protect, maybe even MKSYSBs, etc. That way you can "use" all that wasted space.

You can refer to the following IBM KB:
http://www-01.ibm.com/support/docview.wss?uid=isg3T1013047

Anonymous said...

What are PPs and why are they used?