
POWERVM - LPM

Live Partition Mobility

Live Partition Mobility (LPM) allows you to migrate partitions from one physical server to another while the partition is running.

When all requirements are satisfied and all preparation tasks are completed, the HMC verifies and validates the LPM environment. This HMC function is responsible for checking that all hardware and software prerequisites are met. If the validation is successful, the partition migration can be initiated by using the HMC GUI or the HMC command-line interface.

Types of LPM:
- Active partition migration is the ability to move a running LPAR to another system without disrupting the operation of the LPAR.
- Inactive partition migration allows you to move a powered-off LPAR from one system to another.
- Suspended partition migration allows you to move a suspended LPAR from one system to another.

PowerVM allows up to 8 concurrent migrations per Virtual I/O Server and up to 16 per system.

-------------------------------------------

Mover service partition (MSP):
MSP is an attribute of the Virtual I/O Server partition. (This has to be set on the HMC for the VIOS LPAR.) Two mover service partitions are involved in an active partition migration: one on the source system and one on the destination system. Mover service partitions are not used for inactive migrations.
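
A quick check of the MSP designation from the HMC command line (just a sketch; the msp attribute is listed by lssyscfg, system names are placeholders):

lssyscfg -r lpar -m source_sys -F name,msp              VIOS partitions with msp=1 can act as mover service partitions
lssyscfg -r lpar -m dest_sys -F name,msp                the destination VIOS must be an MSP as well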

-------------------------------------------

Virtual asynchronous services interface (VASI):

The source and destination mover service partitions use this virtual device to communicate with the POWER Hypervisor to gain access to partition state. The VASI device is included on the Virtual I/O Server, but is only used when the server is declared as a mover service partition.
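
To confirm that the VASI device exists on the Virtual I/O Server, a minimal check (assuming the device shows up under its usual name, vasi0):

lsdev | grep vasi                                       run as padmin on the VIOS; vasi0 should be Available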

--------------------------------------------

LPM overview:

1. The partition profile (active profile only) is copied from the source to the target FSP.
2. Storage is configured on the target.
3. The mover service partitions (MSPs) are activated.
4. Partition migration starts, copying memory pages (necessary pages are retransferred).
5. When the majority of memory pages have been moved, the partition is activated on the target.
6. The final memory pages are moved.
7. Storage and network traffic are cleaned up.
8. Storage resources are deconfigured from the source.
9. The partition profile is removed from the source server.


Active partition migration involves moving the state of a partition from one system to another while the partition is still running. During the migration the memory of the LPAR is copied over to the destination system. Because the partition is active, a portion of the memory changes during the transfer. The hypervisor keeps track of these changed memory pages on a dirty page list and retransfers them as necessary.

Live Partition Mobility does not make any changes to the network setup on the source and destination systems. It only checks that all virtual networks used by the mobile partition have a corresponding shared Ethernet adapter on the destination system.

The time necessary for an LPM depends on the LPAR memory size, the LPAR memory activity (writes) and the network bandwidth between source and destination (a dedicated LPM network with at least 1 Gbps is recommended). When running 8 concurrent migrations through a Virtual I/O Server, a 10 Gbps network is recommended. (High-speed network transfers can also generate extra CPU load on the VIOS side, which can slow down LPM.) If multiple mover service partitions are available on either the source or destination systems, it is a good idea to distribute the load among them, as shown below.
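
If several MSPs exist, they can be selected explicitly when the migration is started from the CLI (a sketch; source_msp_name and dest_msp_name are migrlpar attributes, vios1/vios2 are placeholder names):

migrlpar -o m -t dest_sys -m source_sys -p lpar1 -i "source_msp_name=vios1,dest_msp_name=vios2"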

A single HMC can control several concurrent migrations. The maximum number of concurrent migrations is limited by the processing capacity of the HMC and contention for HMC locks. As the number of concurrent migrations grows, the setup time using the GUI can become long; in this case the CLI can be faster (see the sketch below).
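
For concurrent migrations from the CLI, the migrlpar command shown later in this post can simply be started in the background for each partition (a sketch; lpar1/lpar2 are placeholders):

migrlpar -o m -t dest_sys -m source_sys -p lpar1 &
migrlpar -o m -t dest_sys -m source_sys -p lpar2 &
lslparmigr -r lpar -m source_sys -F name,migration_state        monitor the running migrations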

--------------------------------------------------

Requirements for LPM:
(Most of them are checked by the validation process.)
(At the end of the PowerVM Introduction and Configuration Redbook there is a good checklist, with pictures, showing where to check each item.)

Hardware:
- POWER6 or later systems
- The system must be managed by at least one HMC or IVM (with dual HMCs, both must be on the same level)
- If different HMCs are used for the source and destination, both HMCs must be on the same network (so they can communicate with each other)
- Migration readiness of source and destination (for example, a server running on battery power is not ready; validation checks this)
- The destination system must have enough processor and memory resources to host the mobile partition (see the checks below)
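
The free resources on the destination system can be checked from the HMC before validation (a sketch using the usual lshwres fields; dest_sys is a placeholder):

lshwres -r proc -m dest_sys --level sys -F curr_avail_sys_proc_units        free processing units
lshwres -r mem -m dest_sys --level sys -F curr_avail_sys_mem                free memory (in MB)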


VIOS:
- PowerVM Enterprise Edition with a Virtual I/O Server (or dual VIOSes), version 1.5.1.1 or higher
- Working RMC connection between the HMC and the VIOS (see the check below)
- The VIOS must be designated as a mover service partition on the source and destination
- The VIOS must have enough virtual slots on the destination server
- If a virtual switch is used, it has to have the same name on the source and destination side
- The VIOS on both systems must have a SEA configured to bridge to the same Ethernet network (vNIC can be used as well)
- The VIOS on both systems must be capable of providing access to all disk resources of the mobile partition
- If VSCSI is used, the disks must be accessible by the VIO Servers on both the source and destination systems
- If NPIV is used, the physical adapter max_xfer_size should be the same or greater on the destination side (lsattr -El fcs0 | grep max_xfer_size)
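
RMC connectivity and max_xfer_size can be verified before the migration (a sketch; lspartition -dlpar on the HMC lists the partitions it can reach via RMC):

lspartition -dlpar                                      the VIOS (and the mobile LPAR) should show up with an active RMC connection
lsattr -El fcs0 | grep max_xfer_size                    compare this value on the source and destination VIOS (NPIV)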


LPAR:
- The AIX version must be AIX 6.1 or later
- Working RMC connection between the HMC and the LPAR
- The LPAR has a unique name (it cannot be migrated if the LPAR name is already in use on the destination server; see the checks below)
- Migration readiness (an LPAR in a crashed or failed state cannot be migrated; a reboot may be needed; validation checks this)
- No physical adapters may be used by the mobile partition during the migration
- No logical host Ethernet adapters
- The LPAR should have a virtual Ethernet adapter or vNIC
- The LPAR to be migrated cannot be a VIO Server
- The mobile partition's network and disk access must be virtualized by using one or more Virtual I/O Servers
- All virtual networks (VLANs) of the LPAR must be available on the destination server
- The disks used by the mobile partition must be accessed through virtual SCSI, virtual Fibre Channel-based mapping, or both
- If VSCSI is used, no LVs or files may be used as backing devices (only LUNs can be mapped)
- If NPIV is used, each VFC client adapter must have a mapping to a VFC server adapter on the VIOS
- If NPIV is used, at least one LUN should be mapped to the LPAR's VFC adapter
- The LPAR is not designated as a redundant error path reporting partition
- The LPAR is not part of an LPAR workload group (it can be dynamically removed from a group)
- The LPAR is not using huge pages (allowed for inactive migration)
- The LPAR is not using Barrier Synchronization Register (BSR) arrays (allowed for inactive migration)
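
Two of the above points can be checked quickly from the HMC (a sketch; lpar1 is a placeholder for the mobile partition):

lssyscfg -r lpar -m dest_sys -F name | grep -w lpar1                           the name must not already exist on the destination
lssyscfg -r prof -m source_sys --filter lpar_names=lpar1 -F io_slots           should not list any dedicated physical I/O slots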


--------------------------------------------------

Some additional notes:

- This is not a replacement for a PowerHA solution or a Disaster Recovery solution.
- By default the partition data is not encrypted when transferred between MSPs.
- Ensure that the logical memory block (LMB) size is the same on the source and destination systems.
  (In ASMI or "lshwres -r mem -m <managed system> --level sys -F mem_region_size")

--------------------------------------------------

lslparmigr -r sys -m <system>                           shows how many concurrent migrations are possible (num_active_migrations_supported)
lslparmigr -r lpar -m source_sys                        list the status of LPARs (lpar_id is shown as well)
migrlpar -o v -t dest_sys -m source_sys --id 1          validate the LPAR (by id) for migration
echo $?                                                 if the return code is 0, validation was successful

migrlpar -o m -t dest_sys -m source_sys -p lpar1 &      migrate the LPAR (in the background)
lssyscfg -r lpar -m source_sys -F name,state            show the partition state

lslparmigr -r lpar -m source_sys -F name,migration_state,bytes_transmitted,bytes_remaining

--------------------------------------------------

nmon -> p (pressing "p" in nmon shows partition details that are useful for migration)

--------------------------------------------------

Steps needed for LPM via HMC GUI:

1. choose the LPAR on the HMC -> Operations -> Mobility
2. choose Validate (the destination system name is filled in automatically, or choose one); other settings can be left as they are -> Validate
3. after validation, choose the slot IDs that will be used on the destination system (change other settings if needed)
4. Migrate -> a progress bar is shown and you are informed when it is done


--------------------------------------------------

HSCLA27C The operation to get the physical device location for adapter ...
on the virtual I/O server partition ... has failed.



The solution for me was to remove unnecessary NPIV adapters and mappings.
(I had NPIV configured for that LPAR, but no disk was assigned to it. After removing this NPIV config, LPM was successful.)
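
To find and clean up such unused NPIV mappings, something like this can be done on the VIOS (a sketch; vfchost0 is a placeholder, and the now unmapped virtual FC server adapter can afterwards be removed with DLPAR on the HMC):

lsmap -all -npiv                                        list all virtual FC (NPIV) mappings and their status
vfcmap -vadapter vfchost0 -fcp                          remove the mapping of vfchost0 from its physical FC port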

--------------------------------------------------

43 comments:

Anonymous said...

Thanks Admin
Abdul

Anonymous said...

Hi Admin ,
If the moving partition has WPARs configured, what will happen while migrating to the destination? And can we move a VIOS to another system with LPM?
Thanks & regards
Abdul

aix said...

Hi Abdul, a VIOS cannot be moved to another system. The VIOS is the so-called "mover service partition", which means that with the help of the VIOS we can move other partitions.
Regarding WPARs, I never tested what happens with them, but I guess they should work normally, as LPM migrates the whole LPAR.

Anonymous said...

Hi , Thanks for everything
I have an LPAR configured with NPIV which has disks assigned to it via FC. How should I perform the LPM? If I have to remove the NPIV configuration before migration, how do I assign those disks back to the LPAR after the migration? This might be a very silly question but I am new to AIX and I will be assigned to do the LPM at work. Any other advice you have will be appreciated.

aix said...

Hi, if you use NPIV, then you should not remove the NPIV configuration before LPM. If you check the WWPN numbers in the HMC, you will see that every virtual FC adapter has 2 WWPN numbers. What is important is to zone/map the LUNs to both WWPN numbers, because during LPM both of them will be used.

Unknown said...

Hi,
What happens to the source LPAR after LPM is finished? Does it stay on the frame or does it get removed?

Anonymous said...

It gets removed.

Anonymous said...

Hi, thanks for everything!! Can you do LPM if you get disks via VSCSI? I have an LPAR setup where I get the OS (rootvg) disk via VSCSI and other disks via NPIV. Is LPM possible in this scenario? What would be the steps if so?

Anonymous said...

hi admin,

I am a bit confused: we are only moving the profile from one physical box to the other, and after this we are attaching the disks from storage. But how will it auto-configure the physical adapters and take WWN numbers with the same value to assign the disks?


thanks for the help in advance.

aix said...

For LPM 2 WWN numbers are needed, so the disks are zoned to these 2 WWNs, but only 1 is in use. During LPM it auto-configures the secondary WWN on the destination server, so it is possible to have the same disks on the other system.
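
(As a sketch, both WWPNs of a client virtual FC adapter can be listed from the HMC, so they can be given to the SAN team for zoning; system and LPAR names are placeholders:
lshwres -r virtualio --rsubtype fc --level lpar -m source_sys --filter lpar_names=lpar1 -F slot_num,wwpns)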

Anonymous said...

Hello ,
is there any command on the AIX (LPAR) level which can be used to determine whether an LPAR has undergone an LPM operation in the past (i.e. is no longer on its original managed system)?

Thanks in advance.

Anonymous said...

Hello,

which WWPNs of the NPIV adapters are in permanent use after the migration ? The original ones or the second (additional) WWPNs, which are used during migration ?

Thanks in advance.

DAC said...

You don't need to do anything additional... VSCSI is the same as NPIV for LPM... Just ensure the adapters are set as "Not required" for every virtual adapter on the LPAR you want to move, and ensure that the VIOS on the source and on the destination managed system sees the same disks and the "reserve_policy" attribute of each disk is set to "no_reserve"... All the LUN-to-VSCSI mapping is done by LPM...
Note: Only whole SAN disks can be used with LPM (either NPIV or VSCSI)...
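
(As a sketch, checking and setting the reserve policy on the VIOS, with hdisk5 as a placeholder:
lsdev -dev hdisk5 -attr reserve_policy
chdev -dev hdisk5 -attr reserve_policy=no_reserve)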

DAC said...

Does that matter? If you want to be able to do further LPM operations, you have to keep both in the SAN zoning...

DAC said...

I leave you a couple of links that may help with the LPM implementation:
Live Partition Mobility Setup Checklist:
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/tips1184.html?Open
Basic understanding and troubleshooting of LPM:
http://www.ibm.com/developerworks/aix/library/au-LPM_troubleshooting/
.
LPM is REALLY GREAT...
.
Best regards.

Unknown said...

Hi, first of all, thanks for the article, it is very rich about how LPM works.
But when I get to the practical part, I get the following error after the part where I have to enter the remote HMC and the physical system I want to migrate to.
Error:
HSCLA318 The migration command issued to the destination HMC failed with the
following error: HSCLA335 The Hardware Management Console for the destination
managed system does not support one or more capabilities required to perform
this operation. The unsupported capability codes are as follows: lpar_uuid

Thanks for the help.

Anonymous said...

Hi All,

Usually a virtual FC adapter has two WWPNs and this virtual FC is mapped to a physical FC on the VIOS. Though LPM moves the WWPNs with the migrating partition, is the virtual FC to physical FC mapping on the destination VIOS not required after the migration?

aix said...

Hi, during LPM validation it will show which FC adapter layout LPM offers (you can change that), and after LPM no additional configuration is needed from your side.

csr.tech.notes said...

Really great effort. Thanks for the patience and updates.

csr.tech.notes said...

Hi All,

Will LPM be possible without NPIV?

Thanks,

Unknown said...

No, NPIV is required to maintain storage connectivity during the migration to a different physical host... It floats the secondary WWPNs onto the new server as it spins up the new LPAR on it.

Unknown said...

Hi All,

I have two sets of VIO servers, each connected to different switches within the same fabric.

Ex: A,B and X,Y are two sets of VIO servers
SW1 and SW2 are two switches
A,B are connected to SW1 and X,Y are connected to SW2
SW1 and SW2 are interconnected via ISL (same fabric)

Can we perform the LPM move from A,B to X,Y when the VIO servers are connected to different switches?

Anonymous said...

Hi All,

I have a query with the below setup for doing LPM.

I have an LPAR which has disks and tape drives mapped to it using NPIV, with the disks using fcs0 and the tape drives using fcs3. The disks and tape drives are mapped to both WWN numbers. It's a dual VIOS setup.

Can we do LPM using command line with the above setup?
Please suggest.

aix said...

Hello, your setup looks OK. You can do a validation first to see for yourself, but I assume it will work.

Anonymous said...

how about LPM from a frame with dual VIOS to a frame with a single VIOS? Is that possible?

thanks before.

aix said...

I have never tried. In my opinion it should work, if everything is configured correctly.

Anonymous said...

We have dual VIOS and LPM was working, but we have sustained a HW failure on one CEC. At least one VIOS seems entirely healthy and the client LPARs are still running on NPIV volumes. We'd like to LPM off before taking the system down to fix it, but we are getting HSCLA27C. Can we rmdev -Rdl the fscsi device from the client LPARs for the move and run cfgmgr on the destination to re-find the same WWNs and paths for MPIO?

Unknown said...

While selecting the MSP option in the VIOS general settings and clicking the Save button, it throws the error "Another operation has changed the configuration. Refresh the web page then try the operation again."
How can this be resolved?

jana said...

Are there any network changes to be made before migrating? The VLANs are the same and both frames are connected to the same HMC.

Jorge said...

Hello.
When you say "LPAR is not using huge pages", do you also refer to Large Pages? In AIX, Large Pages use a 16 MB page size.
Is it possible to use LPM with an LPAR/AIX that has Large Pages enabled?

aix said...

Hello, this is not an AIX setting, it refers to a setting in the Managed System. On AIX Large Pages can be used, here is a link about huge pages:
https://www.ibm.com/support/knowledgecenter/TI0003N/p8hat/p8hat_aixviewhgpgmem.htm

Unknown said...

Is it possible to select an LPAR ID on the destination frame?

aix said...

If you do it with a command on the HMC, then it should be possible. Please check the man page of the migrlpar command:
"Attributes that can be specified when validating or migrating a single partition:
dest_lpar_id - The partition ID to use on the destination managed system"
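
For example, something like this should work (a sketch; lpar1 and the ID 5 are placeholders):
migrlpar -o m -m source_sys -t dest_sys -p lpar1 -i "dest_lpar_id=5"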

Harry said...

This is a great article. I bumped into this while I was looking into an issue I had yesterday while doing LPM.
We have 2 Power nodes, NodeA and NodeB, running a RAC cluster with server1 and server2 respectively.
I was told to move server1 (from NodeA) onto NodeB while RAC was still running.
As soon as the LPM completed successfully I got a message on the HMC that the LPAR had restarted itself.
I was a bit taken aback when I was told to LPM an LPAR that is still part of the RAC cluster onto the same physical node, because it defeats the purpose of having the cluster on a different node.

Any pointers would be really helpful.
Cheers

aix said...

Hi, without knowing any details of your case, my experience with Oracle RAC is that it has strict timeout requirements, which can trigger reboots. At the end of the LPM there is a moment when the running application has to be activated on the target server, and this may cause a potential issue with a "sensitive" RAC. But this is just an idea...

Harry said...

Thank you for your reply.
It's, power9, Aix 7.1, v7000 storage, NPIV, HMC930, vios 2.2.6.x

Harry said...

So does that mean that doing LPM while RAC is running on the same node is legit?

aix said...

Hi, I have found this IBM paper from 2011, maybe it is not valid anymore, probably you should check it with IBM:
https://levipereira.files.wordpress.com/2011/08/lpm_a_rac_node_july-27-2001.pdf

"At the time of writing LPM operation on an Oracle RAC node is not supported with Database and
Clusterware started.
At the end of the LPM process, during the memory block migration to the target server, a short freeze of
the processes may happen depending on the memory activity. The Oracle cluster mechanism
permanently checks the nodes connectivity and the internal monitoring processes are defined with
timeout delays. Then, the LPM operation may generate a delay and Oracle RAC monitoring may be
disturbed. In the worst case of disruption during the migration, the node will get out of the cluster and will
reboot. This is regular behavior of the Oracle RAC cluster. "

Anonymous said...

Good article ! Thanks !
If I am checking both vfc_mapping and vswitch_mapping during LPM using CLI, is the profile changed on the destination? When does it happen?

Harry said...

Thank you for the link.

Anonymous said...

"changing"

JOY GHOSH said...

Thanks for posting this. It helps me a lot. Could you please let me know how I can start and stop the LPM application which is hosted in the AIX LPAR.

Anonymous said...

please check in errpt