
NIM - nimadm

AIX migration (upgrade) with nimadm:

AIX migration (or upgrade) is the process of moving from one version of AIX to another version (for example from AIX 5.3 to 6.1 or 7.1).

This method preserves all user configurations and updates the installed filesets and optional software products. The main advantage of a migration installation compared to a new and complete overwrite is that most of the filesets and data are preserved on the system. It keeps directories such as /home, /usr, /var, logical volume information and configuration files. The /tmp file system is not preserved during the migration of the system.

Migration can be achieved by:
- Migration by using NIM: full description can be found at http://www.ibm.com/developerworks/aix/library/au-aix-system-migration-installation/index.html
- Migration by using a CD or DVD drive: the DVD must be inserted, and the instructions on the screen have to be followed
- Migration by using an alternate disk migration: this can be done with the "nimadm" command
- Migration by using mksysb: this can be done by using NIM or with the "nimadm" command


-----------------------------

Migration with nimadm:

NIMADM: Network Install Manager Alternate Disk Migration
(It means installation occurs through the network and it is written to another disk.)

The nimadm command is a utility that creates a copy of rootvg on a free disk and simultaneously migrates it to a new version of AIX in the background. This command is called from the NIM master, and it copies the NIM client's rootvg to the NIM master (via rsh). It performs the migration on the NIM master and after that copies the data back to the NIM client (to the specified disk). When the system is rebooted, the new AIX version will be loaded.

Advantages:
- Migration happens while the system is online; the only downtime is the reboot.
- The extra load is only on the NIM master; the NIM client is not burdened with any additional overhead.
- If there are problems with the new version, the fallback is only 1 reboot back to the old image.

-----------------------------

Migration with Local Disk Caching vs. NFS:

By default nimadm uses NFS for transferring data from the client. Local disk caching on the NIM master avoids the large number of NFS writes, which can be useful on slow networks (where NFS is a bottleneck). This function can be invoked with the "-j VGname" flag. With this flag the nimadm command creates file systems in the specified volume group (on the NIM master) and uses streams (rsh) to cache all of the data from the client into these file systems. (Without this flag NFS read/write operations are used for data transfer, and NFS tuning may be required to optimize nimadm performance.)

Local disk caching can improve performance on slow networks, and it allows TCB enabled systems to be migrated with nimadm.
(Trusted Computing Base is a security feature which periodically checks the integrity of the system. Some info about TCB can be found here: http://www.ibm.com/developerworks/forums/thread.jspa?threadID=183572)
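
For illustration, a minimal sketch of the two invocation forms (using the same example resource names as in the migration section below):

   with local disk caching (data is cached in file systems in VG nimadmvg on the NIM master, transferred via rsh):
   # nimadm -j nimadmvg -c aix_client1 -s spot_6100-06-06 -l lpp_source_6100-0606 -d hdisk1 -Y

   with NFS (no -j flag; the client's file systems are NFS mounted on the NIM master):
   # nimadm -c aix_client1 -s spot_6100-06-06 -l lpp_source_6100-0606 -d hdisk1 -Y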

-----------------------------

PREREQUISITES, PREPARATION, MIGRATION, POST_MIGRATION


I. Prerequisites on NIM master:


1. NIM master level
   The NIM master must be at the same level as, or a higher level than, the level being migrated to.

2. lpp_source and spot level
   The selected lpp_source and SPOT must match the AIX level to which you are migrating.

3. bos.alt_disk_install.rte
   The same level of bos.alt_disk_install.rte must be installed in the rootvg (on NIM master) and in the SPOT which will be used.
   Check on NIM master: lslpp -l bos.alt_disk_install.rte
   Check in SPOT:       nim -o lslpp -a filesets='bos.alt_disk_install.rte' <spot_name>
   (It is not necessary to install the alt_disk_install utilities on the client)

   0505-205 nimadm: The level of bos.alt_disk_install.rte installed in SPOT
   spot_6100-06-06 (6.1.6.16) does not match the NIM master's level (7.1.1.2).

   If the NIM master is on 7.1 but you would like to migrate from 5.3 to 6.1, the SPOT and the installed version will differ. There are 2 ways to correct this:
   - install the 7.1 version of this fileset into the 6.1 SPOT, or
   - remove the 7.1 version of this fileset from the NIM master, temporarily install the 6.1 version, and after the migration install the 7.1 version back.

4. free space in vg
   The VG that will be used for migration must have enough free space (about the size of the client's rootvg); see the example commands after this list.

5. rsh
   Check if client can be reached via RSH from NIM master: rsh <client_name> oslevel -s

6. NFS mount (only needed if local disk caching is not used)
   The NIM master must be able to perform NFS mounts and read/write operations. (If "-j VGname" is used in the nimadm command, this is not needed!)
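
   A quick way to verify points 1, 2 and 4 from the NIM master (the resource and VG names below are examples, adjust them to your environment):
   # oslevel -s                                               (level of the NIM master)
   # lsnim -l spot_6100-06-06 | grep oslevel                  (level of the SPOT)
   # nim -o showres lpp_source_6100-0606 | grep bos.mp64      (level of the base filesets in the lpp_source)
   # lsvg nimadmvg | grep "FREE PPs"                          (free space in the VG used for caching)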



II. Prerequisites on NIM client:

1. Hardware and firmware levels
   The client's hardware and firmware must be at the required level to support the AIX level that is being migrated to.

2. free disk
   Client must have a free disk, large enough to clone rootvg

3. NFS mount
   NIM client must be able to perform NFS mounts and read/write operations.

4. multibos
   The nimadm command is not supported with the multibos command when there is a bos_hd5 logical volume.

5. lv names
   lv names must not be longer than 11 characters (because they will get an alt_ prefix during migration, and the AIX limitation is 15 characters for an lv); see the check after this list.

6. TCB (Trusted Computing Base is a security feature which periodically checks the integrity of the system.)

   If you use the disk caching option (-j flag), it does not matter whether TCB is turned on or off (usually it is not turned on).
   However if you omit the "-j" flag (NFS read/write), TCB should be turned off (TCB needs to access file metadata which is not visible over NFS).
   Command to check if TCB is enabled/disabled: odmget -q attribute=TCB_STATE PdAt

7. ncargs (specifies the maximum allowable size of the ARG/ENV list when running exec() subroutines)
   There is a bug: if ncargs is customized to a value less than 256, it resets all other sys0 attributes to their default values.
   So make sure the ncargs value is at least 256: lsattr -El sys0 -a ncargs  (chdev -l sys0 -a ncargs='256')
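
   A quick way to check points 2 and 5 on the client (a sketch, assuming rootvg is the volume group being migrated):
   # lspv                                                (a disk with "None" in the VG column is free)
   # lsvg -l rootvg | awk 'length($1) > 11 {print $1}'   (lists LV names longer than 11 characters)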



III. Preparation on NIM client:

1. create mksysb

2. check filesets, commit: lppchk -v, installp -s (smitty commit if needed)

3. pre_migration script: /usr/lpp/bos/pre_migration (it will show you if anything must be corrected, output is in /home/pre_migration...)

4. save actual config (mounts, routes, filesystems, interfaces, lsattr -El sys0, vmo -a, no -a, ioo -a ...)

5. save some config files (/etc/motd, /etc/sendmail.cf, /etc/ssh... (/home won't be overwritten, so these can be saved there))
   for ssh this can be used:
   # ssh -v dummyhost 2>&1 | grep "Reading configuration" (it will show location of ssh_config: debug1: Reading configuration data /etc/ssh/ssh_config)
   # cp -pr <path_to_ssh_dir> /home/pre_migration.<timestamp>/ssh

6. free up a disk: unmirrorvg, reducevg, bosboot, bootlist (see the example commands below)
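
   These are the same commands that are shown later in the mksysb section (hdiskX is the disk being removed from rootvg, hdiskY is the remaining rootvg disk):
   # unmirrorvg rootvg hdiskX
   # reducevg rootvg hdiskX
   # bosboot -ad /dev/hdiskY
   # bootlist -m normal hdiskY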



IV. Migration (on NIM master):

nimadm -j nimadmvg -c aix_client1 -s spot_6100-06-06 -l lpp_source_6100-0606 -d hdisk1 -Y

   -j: specifies the VG on the master which will be used for migration (file systems will be created here and the client's data is cached here via rsh)
   -c: client name
   -s: SPOT name
   -l: lpp_source name
   -d: hdisk name for the alternate root volume group (altinst_rootvg)
   -Y: agrees to the software license agreements for software that will be installed during the migration.

Migration logs can be found in the /var/adm/ras/alt_mig directory. The migration runs through 12 phases; after that, you will get back the prompt.
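
The progress can be followed from the log directory (the exact log file name depends on the client name, so the name below is only an illustration):
   # ls -ltr /var/adm/ras/alt_mig
   # tail -f /var/adm/ras/alt_mig/<client_name>_alt_mig.log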

Check that altinst_rootvg exists on the client and that the bootlist is set correctly, for example:
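
   # lspv | grep altinst_rootvg       (the alternate rootvg should be on the disk given with -d)
   # bootlist -m normal -o            (it should point to that disk)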



V. Post migration checks on client:

1. check filesets: oslevel -s, lppchk -v, instfix -i | grep ML (update/correct/commit other software/filesets if needed)

2. check config, config files: (sys0, vmo, tunables: tuncheck -p -f /etc/tunables/nextboot) (maxuproc: lsattr -El sys0, chdev -l sys0 -a maxuproc=<value>)

3. post_migration script: /usr/lpp/bos/post_migration (it can run for a long time, 5-10 minutes)

4. others: mksysb, smtctl, rsh, rootvg mirror (see the mirroring example below)
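
If rootvg was unmirrored before the migration, it can be mirrored back after the checks (a sketch, assuming the system now runs from hdisk1 and the old rootvg is still on hdisk0):
   # alt_rootvg_op -X old_rootvg      (removes the old_rootvg definition so hdisk0 can be reused)
   # extendvg rootvg hdisk0
   # mirrorvg rootvg hdisk0
   # bosboot -ad /dev/hdisk0
   # bootlist -m normal hdisk1 hdisk0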

-----------------------------
-----------------------------
-----------------------------

NIMADM MIGRATION with MKSYSB and ALT_DISK_MKSYSB

This is a different method: you migrate an mksysb to a higher level and then restore that mksysb to a free disk.

I did this migration from 6.1 TL6 SP6 to 7.1 TL2 SP2

1. on client: update the alt_disk filesets to the new version (see the example command below)
  (this is needed for alt_disk_mksysb, because the mksysb and the alt_disk filesets have to be at the same level)

  I updated these filesets to 7.1 (however AIX was on 6.1):
 
  bos.alt_disk_install.boot_images  7.1.2.15  COMMITTED
  bos.alt_disk_install.rte          7.1.2.15  COMMITTED
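
  One way to update them (a sketch; it assumes the client is a registered NIM client and the 7.1 lpp_source, here lpp_7100-02-02, is available on the NIM master):
  # nimclient -o cust -a lpp_source=lpp_7100-02-02 -a filesets="bos.alt_disk_install.rte bos.alt_disk_install.boot_images"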


2. on client: unmirror rootvg, free up a disk
  (if rootvg is mirrored, you should unmirror it and free up 1 disk, so 1 disk will be enough for the mksysb restore)
  (otherwise you will get mklv failures, because the system cannot fulfill the allocation request)

  # unmirrorvg rootvg hdiskX
  # reducevg rootvg hdiskX
  # bosboot -ad /dev/hdiskY
  # bootlist -m normal hdiskY

3. on client: create mksysb locally:
  # mksysb -ie /mnt/bb_lpar61_mksysb

  (copy it to a NIM master server which is already at the level we want to migrate to)

4. on NIM master: create a resource from the mksysb file

  # nim -o define -t mksysb -a server=master -a location=/nim/mksysb/bb_lpar61_mksysb bb_lpar61_mksysb

5. on NIM master: migrate mksysb resource to new AIX level
  # nimadm -T bb_lpar61_mksysb -O /nim/mksysb/bb_lpar71_mksysb -s spot_7100-02-02 -l lpp_7100-02-02 -j nimvg -Y -N bb_lpar71_mksysb

    -T - existing AIX 6.1 NIM mksysb resource
    -O - path to the new migrated mksysb resource
    -s - spot used for the migration
    -l - lpp_source used for the migration
    -j - volume group which will be used on NIM master to create file systems temporarily (with alt_ prefix)
    -Y - agrees to license agreements
    -N - name of the new AIX 7.1 mksysb resource

  (after the new mksysb image has been created, copy it to the client)

6. on client: restore new 7.1 mksysb to a free disk with alt_disk_mksysb
  # alt_disk_mksysb -m /mnt/bb_lpar71_mksysb -d hdiskX -k

    -m - path to the mksysb
    -d - disk used for restore
    -k - keep user defined device configuration

  (after that you can reboot the system to the new rootvg)
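
  Before the reboot it is worth verifying that the bootlist points to the new disk:
  # bootlist -m normal -o
  # shutdown -Fr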

-----------------------------

nimadm fails: 0505-160, 0505-213


Solution:

Check whether the problematic package is present in the lpp_source:
# nim -o showres lpp_source | grep sysmgt.websm.webaccess

The problem is with the sysmgt.websm.webaccess fileset. This fileset, part of the sysmgt.websm package, starts processes from its post_i script that cause problems for installations in SPOT environments, and it also affects alternate disk install migration (nimadm).

The workaround is to remove the sysmgt.websm package from the lpp_source, rebuild the .toc, and then run the nimadm process again.
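
A sketch of one way to do this (the lpp_source name is a placeholder; "nim -o update" with rm_images removes the package images, and "nim -o check" rebuilds the .toc):

# nim -o update -a packages="sysmgt.websm" -a rm_images=yes <lpp_source_name>
# nim -Fo check <lpp_source_name>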


-----------------------------

/usr/sbin/nimadm[1147]: domainname:  not found

I have found this: http://www-01.ibm.com/support/docview.wss?uid=isg1IV32979

If bos.net.nis.client is not installed on the NIM master,
nimadm will output this error:
/usr/sbin/nimadm[1147]: domainname:  not found.

It has no impact on the nimadm process, only the error
message should be hidden.

Local fix:
None, the error message can be ignored; the code behind it handles the empty domainname.

-----------------------------

umount: error unmounting /dev/lv11: Device busy
umount: error unmounting /dev/lv10: Device busy
umount: error unmounting /dev/lv01: Device busy
0505-158 nimadm: WARNING, unexpected result from the umount command.
0505-192 nimadm: WARNING, cleanup may not have completed successfully.


This is happening because there are running processes in the above mentioned filesystems.

These filesystems were created during the migration, but when AIX was upgraded, a fileset started running this process in them:

   root 14811364        1   0 12:21:19  pts/2  0:06 /usr/java5/jre/bin/java -Dderby.system.home=/usr/ibm/common/acsi/repos -Xrs -Djava.library.path=...

This process is some System Director stuff, so if you don't have that you can get rid of it.

Workaround:
After restarting the nimadm migration, I monitored these filesystems (fuser -cux <fs_name>).
At around 70% of the installation this process popped up. I waited until the install reached about 90% and then did "kill <pid>".

After this nimadm was successful.

21 comments:

Unknown said...

I Like to share this link also ...
http://www.ibm.com/developerworks/aix/library/au-migrate_nimadm/

aix said...

Thanks, I'll take a look on it :-)

Anonymous said...

HI,

what is rsh access?

Srujana said...

Your site is awesome :-)

Anonymous said...

That NIMADM with mksysb is awesome process .....thanks for ur updates...!!

Anonymous said...

Hi,

I would like to enable to TCB on running AIX LPARs. I got the procedure for enabling TCB using ODM commands. is this recommended way to implement this change. Please comment on it from your experience. Is it must to enable TCB on AIX LPARs.

Anonymous said...

Hi,

Using nimadm am migrating from 5.3-TL12 to 6.1 TL08, untill phase 9 ok, followed the steps, any solution plz..
Syncing cache data to client ...
restore: 0511-133 There is a data read error.: A connection with a remote socket was reset by that socket.
Ignoring data and continuing.
restore: 0511-123 The volume on - is not in backup format.
0505-213 nimadm: ATTENTION, /usr/sbin/restore returned an unexpected result.
0505-217 nimadm: Error syncing cache data to client.

After that nimadm fails, bootdisk is setback to hdisk0

Anonymous said...

can anyone help on this error.

Unknown said...

local disk caching : this method is the one which is used to override the conventional NFS method by using RSH. BUt when i triggered the nimadm using -j option, the NIM server is still trying to mount the lpp_source through NFS on the client. Is this the normal behaviour ?

Unknown said...

As per my knowledge without login into the client server we are going to execute the nim client comands in the nim master......

Pjg said...

We are using this procedure of "NIMADM MIGRATION with MKSYSB and ALT_DISK_MKSYSB" for most of our AIX migrations. But from recently its started giving problem of when we restart server using nre rootvg it always halts either code 517 or 518. Strange but normally these are 5.3 to 7.1 migration. Guys do u had any of such cases.

Thanks to know

Anudeep said...

I updated these filesets to 7.1 (however AIX was on 6.1): --------------> How to update the filesets here. Could you please elobrate

Venkat said...

I love this site. Thank you. much appreciated.

aix said...

welcome :)

balcantara said...

Hi guys, anyone has used the nimadm to upgrade an mksysb (AIX 5.3) to AIX 7.2? and then change the jfs to jfs2?

Anonymous said...

I'm facing the same issue in Phase 9. Can anyone please assist? Thanks.

Unknown said...

Hello Every, I was doing OS migration on mksysb image and it is failed on below phase.

Executing nimadm phase 9.
+-----------------------------------------------------------------------------+
Adjusting client file system sizes ...
Adjusting size for /
Adjusting size for /admin
Adjusting size for /home
Adjusting size for /opt
Adjusting size for /tmp
Adjusting size for /usr
Adjusting size for /usr/tivoli/tsm
Adjusting size for /var
Adjusting size for /var/adm/ras/livedump
Backing up cache data to mksysb file /export/nim/mksysb/nimrtw001_71_mksysb ...
rm: cannot remove directory /export/nim/mksysb/nimrtw001_71_mksysb
Is a directory
/usr/sbin/nimadm[1147]: /export/nim/mksysb/nimrtw001_71_mksysb: cannot create
0505-240 nimadm: Error backing up cache data to mksysb file.
Cleaning up alt_disk_migration on the NIM master.
Unmounting client mounts on the NIM master.
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/var/adm/ras/livedump
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/var/adm/ras/livedump
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/var
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/var
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/usr/tivoli/tsm
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/usr/tivoli/tsm
0505-158 nimadm: WARNING, unexpected result from the fuser command.
umount: Could not find anything to unmount
0505-158 nimadm: WARNING, unexpected result from the fuser command.
umount: Could not find anything to unmount
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/usr
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/usr
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/tmp
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/tmp
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/opt
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/opt
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/home
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/home
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/admin
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst/admin
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst
forced unmount of /nimrtw001_71_mksysb_mm_alt/alt_inst
0505-158 nimadm: WARNING, unexpected result from the umount command.
0505-192 nimadm: WARNING, cleanup may not have completed successfully.

Anonymous said...

Hi all, is there a way to do some kind of "manual" alternate disk migration without a NIM server ? (using for example alt_disk_copy, alt_rootvg_op, chroot...)

Harry said...

Client running is 7.1.
Using the mksysb from 7.1 to 7.2 upgade the mksysb restore was alright with the bootlist updated.
Once I rebooted the server, it wouldn't come up on the network.
any known issues?

Harry said...

nimadm -T bb_lpar71_mksysb -O /nim/mksysb/bb_lpar72_mksysb -s spot_7200-05-05 -l lpp_7200-05-05 -j nimvg -Y -N bb_lpar72_mksysb

after completion. the server didn't come up. its a test bed though