

Multipath I/O is a technique that defines more than one physical path between the computer and the storage system, which provides fault tolerance and can enhance performance. For example, a disk can connect to 2 Fibre Channel adapters, so we have 2 paths to the disk. If one path (adapter) fails, I/O can be routed to the remaining adapter without interrupting the application. (If both paths are used simultaneously, data can be transported at double speed.)

The Path Control Module (PCM) is responsible for controlling these multiple paths. Each storage device requires a PCM. The PCM is storage vendor supplied code that gets control from the device driver to handle path management. It can be separate (3rd party) software (a driver), or the native PCM package of AIX can be used, which comes with the base operating system. (Usually people refer to it as AIXPCM, MPIO PCM or just MPIO.)

# lslpp -L devices.common.IBM.mpio.rte
Fileset                      Level  State  Type  Description (Uninstaller)
devices.common.IBM.mpio.rte  C     F    MPIO Disk Path Control Module

As IBM creates storage systems (DS8000...), it provides additional drivers for these storage devices which are separate software from AIX, but as mentioned above AIX has a native package which can be used for multipathing as well.

With AIX and multipathing (on IBM storage) we have the following options (in AIX 5.3):
    -classic SDD: (ODM definitions: ibm2105.rte, SDD driver: devices.sdd.53.rte)
    -default PCM (MPIO): it comes with AIX (no other filesets needed; it is activated only if there are no SDD ODM definitions)
    -SDDPCM: SDD version which uses MPIO and has the same commands as SDD
           (ODM def: devices.fcp.disk.ibm2105.mio.rte, SDDPCM driver: devices.sddpcm.53.rte)

As a summary: native MPIO is installed as part of the base OS, packaged as a kernel extension. Paths are discovered during system boot (cfgmgr) and disks are created from paths at the same time. No further configuration is required. (Only this native MPIO is discussed on this page.)


Path statuses

Path status values of lspath:

enabled:   path is configured and operational. It will be considered when paths are selected for IO.
disabled:  path has been manually disabled and will not be considered when paths are selected for IO. (set back to enabled with 'chpath')
failed:    path had IO failures that have rendered it unusable. It will not be considered when paths are selected for IO.
defined:   path has not been configured into the device driver.
missing:   path was defined in a previous boot, but it was not detected in the most recent boot. (these paths can be recovered with 'cfgmgr')
detected:  path was detected during boot, but it was not configured. (this status should never appear in lspath output, only during boot)

It is best to manually disable paths before storage maintenance (rmpath). AIX MPIO stops using any Disabled or Defined paths, so no error detection or recovery will be done on them. This ensures that the AIX host does not go into extended error recovery during a scheduled maintenance. After the maintenance is complete, the paths can be re-enabled with cfgmgr. (When disabling multiple paths for multiple LUNs, rmpath is simpler than chpath, as it does not have to be run on a per-disk basis.)
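A possible sequence for such a scheduled maintenance on one fabric (fscsi1 is illustrative, not taken from this page):

```shell
# Before maintenance: put all paths under fscsi1 into Defined state
# (without -d the path definitions are kept, so they can be recovered)
rmpath -p fscsi1

# Verify: no path through fscsi1 should be Enabled any more
lspath -p fscsi1

# ... storage / fabric maintenance happens here ...

# After maintenance: configure the Defined paths back to Enabled
cfgmgr -l fscsi1
lspath -p fscsi1
```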

Additional path_status values of lsmpio:
Sel    Path is being selected for I/O operations at the time the lsmpio command is run.
Rsv    Path has experienced a reservation conflict. It might indicate a usage or configuration error, with multiple hosts accessing the same disk.
Fai    Path experienced a failure. I/O sent on this path is failing.
       In some cases, AIX MPIO leaves one path to the device in Enabled state, even when all paths are experiencing errors.
Deg    Path is in a degraded state. The path was used for I/O, but there were errors, which caused the path to be temporarily avoided.
Clo    Path is closed. If only some paths are closed, those paths might have experienced errors. If all paths are closed, the device is closed.
       AIX MPIO periodically attempts to recover closed paths, until the device path is open.


Disk Parameters

# lsattr -El hdisk0
PCM             PCM/friend/vscsi                 Path Control Module        False
algorithm       fail_over                        Algorithm                  True
hcheck_cmd      test_unit_rdy                    Health Check Command       True+
hcheck_interval 60                               Health Check Interval      True+
hcheck_mode     nonactive                        Health Check Mode          True+

algorithm:      (It determines how many paths should be used to transmit I/O)
fail_over:      I/O is routed to one path at a time. If it fails, the next enabled path is selected. (Path priority determines which path is next.)
round_robin:    I/O is distributed to all enabled paths. Paths with the same priority get equal I/O, otherwise a higher priority path gets a higher percentage of the I/O.
shortest_queue: Similar to round_robin, but when load increases it favors path with fewest active I/O operations. Path priority is ignored.

The fail_over algorithm is always used for virtual SCSI (VSCSI) disks on a Virtual I/O Server (VIOS) client, although the backing devices on the VIOS instance might still use round_robin. Fail_over is also the only algorithm that might be used if using SCSI-2 reserves (reserve_policy=single_path).
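As an illustration (hdisk2 is a made-up example), the algorithm can be inspected and changed with lsattr/chdev; note that round_robin and shortest_queue cannot be combined with reserve_policy=single_path, so the reservation policy may have to be changed in the same step:

```shell
# Check the current path selection algorithm
lsattr -El hdisk2 -a algorithm

# Switch to shortest_queue; if the disk is open, use -P and reboot
# (or -U on AIX levels that support dynamic attribute updates)
chdev -l hdisk2 -a algorithm=shortest_queue

# round_robin/shortest_queue exclude SCSI-2 reserves, so the
# reservation policy may need to be changed together with it:
chdev -l hdisk2 -a reserve_policy=no_reserve -a algorithm=round_robin
```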

hcheck_mode:    (It determines if a path can be used for I/O or not. Paths in Disabled or Missing state are not checked.)
nonactive:      Only paths with no active I/O (no ongoing I/O operation) are checked.
enabled:        All enabled paths are being checked. (Does not matter if there is an I/O operation or not.)
failed:         Only failed paths are checked.

With the nonactive setting, paths marked as 'failed' are checked as well (in addition to 'enabled' paths with no active I/O). With round_robin and shortest_queue all paths are being used for I/O, so the health check command is sent only on failed paths. The default value for all devices is nonactive, and there is little reason to change it unless business or application requirements dictate otherwise.

hcheck_interval: (It is the interval in seconds when health check will occur to check paths for availability.)
A hcheck_interval = 0 disables path health checking, which means any failed paths require manual intervention to recover that path.

The best practice is that it should be greater than or equal to the rw_timeout (read/write timeout) value on the disks. Better performance is achieved when hcheck_interval is slightly greater than the rw_timeout value on the disks.
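For example (hdisk2 is illustrative), the current values can be compared and adjusted with lsattr/chdev:

```shell
# Compare hcheck_interval with rw_timeout on the disk
lsattr -El hdisk2 -a hcheck_interval -a rw_timeout

# e.g. if rw_timeout is 30, keep hcheck_interval above it;
# -P defers the change to the next boot if the disk is in use
chdev -l hdisk2 -a hcheck_interval=60 -P
```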


smitty mpio

lspath                                lists paths (lspath -l hdisk46)
lspath -l hdisk0 -HF "name path_id parent connection path_status status"    more detailed info about a device (it is like lsdev for devices)
lspath -AHE -l hdisk0 -p vscsi0 -w "810000000000"    display attributes for the given path and connection (-w) (-A is like lsattr for devices)
                                      (if only 1 path exist to a parent device connection can be omitted: lspath -AHE -l hdisk0 -p vscsi0)
lsmpio                                lists additional info about paths (which path is selected)
lsmpio -Sl hdisk0 | grep Path         shows path statistics (which path was used mostly in the past)

chpath                                changes path state (enabled, disabled)
chpath -s enabled -l hdisk0 -p vscsi0 it will set the path to enabled status

rmpath -l hdiskX -p vscsi0 -w 870000000000   put path in defined state  (-w can be omitted if only 1 path exists to the parent device)
rmpath -dl hdiskX -p fscsiY           dynamically remove all paths under a parent adapter from a supported storage MPIO device
                                     (-d: deletes, without it puts it to define state)
                                     (The last path cannot be removed, the command will fail if you try to remove the last path)


Failed path handling:
(there were Hitachi disks in Offline (E) state, but they were not unconfigured earlier)
    -lspath | grep -v Enab
    -rmpath -p fscsiX -d
    -cfgmgr -l fcsX
    -lspath | grep -v Enab
    -dlnkmgr view -lu -item


Change adapter setting online:

rmpath -d -p vscsi0                             <--removes all paths from adapt. (rmpath -dl hdisk0 -p vscsi0, it removes only specified path)
rmdev -l vscsi0                                 <--puts adapter into defined state
chdev -l vscsi0 -a vscsi_err_recov=fast_fail    <--change adapter setting (if -P is used it will be activated after reboot)
cfgmgr -l vscsi0                                <--configure back adapter
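For physical FC adapters the same online procedure applies to the fscsiX protocol device; fc_err_recov=fast_fail and dyntrk=yes are the attributes commonly set for multipathed SAN disks (check the recommendation of your storage vendor first; fscsi0 is illustrative):

```shell
rmpath -p fscsi0 -d                          # removes all paths under the adapter
rmdev -l fscsi0                              # puts fscsi0 into defined state
chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes
cfgmgr -l fscsi0                             # configures back the adapter, paths are rediscovered
```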


  1. some very useful information... Thanks.

  2. Hi AIX,

    Whether the Hitachi LUNs can be accessed without dlinkmgr software ie., by using default SDDPCM driver.

    1. The answer to your question is yes and no.
      Yes, Hitachi LUNs can be accessed without dlnkmgr software, but SDDPCM is not for Hitachi LUNs.
      You have to differentiate SDDPCM and PCM (native AIX PCM)
      SDDPCM is for IBM storage only; for this you need to install an additional fileset, for example devices.sddpcm53.rte, and then you can use pcmpath commands. But for native AIX PCM (MPIO) you don't have to install additional software. AIX, with a base operating system install, is capable of using some third party devices (i.e. Hitachi) as MPIO devices (lspath, rmpath...)
      But you need to ask Hitachi support as well, whether the model you have can be used this way.

  3. Hi,

    I have received this from Jose, and I would like to share:

    "I was interested in a summary table of my disks so I wrote the script below; pcmpath query essmap was also OK for me, but this gave me further info

    display the disks from lspath as "No hdiskxx size mb No-paths", without root authority:
    p="/usr/sbin/lspath";for i in `$p| awk ' !/Missing/ {print $2}'|sort|uniq `;do echo "$i; `getconf DISK_SIZE /dev/$i` mb; `$p| awk ' !/Missing/ &&/'$i' / {print $2}'|wc -l|sed 's/ //g'`" ; done|cat -n

    1 hdisk0; 70006 mb; 1
    2 hdisk1; 70006 mb; 1
    3 hdisk2; 20480 mb; 4
    4 hdisk25; 20480 mb; 4
    5 hdisk26; 20480 mb; 4"
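    A hypothetical, testable rewrite of Jose's one-liner: it counts the non-Missing paths per hdisk. The summarize_paths function reads lspath-style output on stdin, so it can be tried without an AIX host; on AIX you would run `lspath | summarize_paths`. (The disk size lookup via getconf DISK_SIZE is left out, as it needs a real /dev/hdiskN.)

```shell
# Count non-Missing paths per disk; field 1 is the status, field 2 the disk name
summarize_paths() {
    awk '$1 != "Missing" { count[$2]++ }
         END { for (d in count) printf "%s; %d paths\n", d, count[d] }' |
    sort
}

# Example with canned lspath output:
printf '%s\n' \
    "Enabled hdisk0 vscsi0" \
    "Enabled hdisk2 fscsi0" \
    "Enabled hdisk2 fscsi1" \
    "Missing hdisk2 fscsi2" | summarize_paths
# hdisk0; 1 paths
# hdisk2; 2 paths
```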

  4. Very helpful.
    Is there any way I can locate a network adapter location by turning on its light, as I have 3 network adapters in a P550 machine in a production environment? My oslevel is AIX52TL4.
    Thanks in advance.

    1. Hi, on AIX 5.3, if you issue the command: diag -> Task Selection -> Hot Plug Task -> PCI Hot Plug Manager -> Identify a PCI Hot Plug Slot
      This will blink the light at that location on AIX 5.3, you should try, maybe it works on AIX 5.2 as well

    2. Thanks for your help. It works.

  5. Hi,

    Please explain briefly about Missing, Failed and Defined states in lspath output.


    1. Hi,

      failed:   Indicates that the path is configured, but it has had IO failures that have rendered it unusable. It will not be considered when paths are selected for IO.
      defined:  Indicates that the path has not been configured into the device driver.
      missing:  Indicates that the path was defined in a previous boot, but it was not detected in the most recent boot of the system.

    2. Hi,

      enabled:   Indicates that the path is configured and operational. It will be considered when paths are selected for IO.
      disabled:  Indicates that the path is configured, but not currently operational. It has been manually disabled and will not be considered when paths are selected for IO.
      failed:    Indicates that the path is configured, but it has had IO failures that have rendered it unusable. It will not be considered when paths are selected for IO.
      defined:   Indicates that the path has not been configured into the device driver.
      missing:   Indicates that the path was defined in a previous boot, but it was not detected in the most recent boot of the system.
      detected:  Indicates that the path was detected in the most recent boot of the system, but for some reason it was not configured. A path should only have this status during boot, so this status should never appear as a result of the lspath command.
      Virender Kumar

  6. hello,

    if I have the pcmpath command, does that mean SDDPCM is set up correctly? (storage: DS8K)

    1. hi, I would say yes...if it works correctly.

  7. I am using MPIO on VIO; with lspath I display 1 of the SAN disks on the VIO server.
    # lspath -l hdisk12 -H -F "name parent connection path_id"
    name parent connection path_id

    hdisk12 fscsi0 500507630700067a,4060400500000000 0
    hdisk12 fscsi0 50050763070b067a,4060400500000000 1
    hdisk12 fscsi1 500507630710067a,4060400500000000 2
    hdisk12 fscsi1 50050763071b067a,4060400500000000 3

    how to explain connection and path_id ?

    1. Hi,
      both of them can be used to uniquely identify paths, for example with chpath commands.

      The connection information differentiates the multiple path instances that share the same logical parent (adapter). (SCSI ID and LUN ID of the device associated with this path.)

      path_id: Indicates the ID of the path, it is used to uniquely identify a path
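      For example, the connection string from the output above can be used with chpath to act on one specific path (the value of path_id 1):

```shell
# Disable only the second path through fscsi0, identified by its
# connection string; the other three paths keep serving I/O
chpath -l hdisk12 -p fscsi0 -w "50050763070b067a,4060400500000000" -s disable

# Re-enable it later
chpath -l hdisk12 -p fscsi0 -w "50050763070b067a,4060400500000000" -s enable
```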

  8. Hi,

    I need to find out the MPIO package version which is installed in AIX. And I need to know which version the HBA cards are using.

    Thank you

    1. Hi ,
      try the following :
      # lslpp -L '-a' devices.common.IBM.mpio.rte
      or :
      # smit list_installed
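      For the HBA part of the question, a sketch (fcs0 is illustrative):

```shell
# Adapter VPD: the Z9/ZA fields show the firmware (microcode) level
lscfg -vl fcs0

# Adapter type, serial number and firmware are also shown by fcstat
fcstat fcs0
```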

  9. Hi,

    how can we check the disk raid level from AIX 7.1 Machine.

    Note: In AIX 5.3 it is #lsattr -El hdisk8 | grep -i raid


  10. Hi AIX man !
    I have this environment where an HDS disk is connected to an LPAR. The point is, even with MPIO installed, there are not 2 paths to the disks. So my question is, what are the packages required on AIX to configure multipath?

    AIX 6100-04
    devices.common.IBM.mpio.rte COMMITTED MPIO Disk Path Control Module
    devices.fcp.disk.Hitachi.array.mpio.rte COMMITTED AIX MPIO Support for Hitachi
    devices.fcp.disk.Hitachi.modular.mpio.rte COMMITTED AIX MPIO Support for Hitachi

  11. Hi All
    I have a question. I am trying to install the Hitachi software 6001 which is already there in my . file; after running everything the output comes up failed.
    This is what I am getting, can anyone tell me where I am making a mistake?

    cannot open /output/hitachi/odm/mpio/6000/HTC_MPIO_Modular_ODM_6000I: No such file or directory
    Please mount volume 1 on /output/hitachi/odm/mpio/6000/HTC_MPIO_Modular_ODM_6000I
    ...and press Enter to continue installp: An error occurred while running the restore command.
    Use local problem reporting procedures.

    installp: CANCELED software for:


  12. How to collect the MPIO related error logs/ event logs on AIX?

  13. Hi Admin,
    please could you help me to understand the difference between MPIO and SDDPCM .
    also, what are the advantages for moving from MPIO/SDD to SDDPCM ?

    Thanks in adv.

  14. I have a question about reserve_policy=single_path. If this setting is configured, does it stop other HBAs from logging in (PLOGI) to the storage?

  15. Is there any timeouts in FC adapter (fcsX) or device driver (fscsiX) that we can tune?
    we have a need to extend the time that AIX spent on the alternate path when failover.
    We observed AIX failed quickly when the LUNs on the alternate path were not yet up,
    and when AIX failed and bubbled up the error to the application, the application failed.
    In other OSes, we can extend this duration to 300 seconds but in AIX we do not know
    what it is and what is the default value. lsattr -El fcsX or fscsiX do not show any
    relevant attributes.


    1. Probably there are some parameters in MPIO which could help, I suggest checking MPIO best practices: https://www.ibm.com/developerworks/aix/library/au-aix-mpio/

  16. Thank you very much for the reply.

    I came across the following statement in the page you mentioned:


    "AIX implements an emergency last gasp health check to recover paths when needed. If a device has only one non-failed path and an error is detected on that last path, AIX sends a health check command on all of the other failed paths before retrying the I/O, regardless of the health check interval setting. This eliminates the need for a small health check interval to recover paths quickly. If there is at least one good path, AIX discovers it and uses it before failing user I/O, regardless of the health check interval setting."

    From the traces we have, since the primary path was gone (rebooted and the switch sent an RSCN), the alternate path was still alive, just the LUNs were in transition (not yet active). AIX checked the state of the LUNs (using Test Unit Ready first, then resent 8 failed IOs, then sent Start/Stop Unit and 8 failed IOs, then repeated the Start/Stop Unit and 8 IOs sequences); after N seconds, AIX failed the IOs and the application on AIX failed. We have yet to capture a longer trace which shows the duration of this check, but we need to know which timeout it is, what its default value is, and whether it can be extended. We need AIX to check for at most 300 seconds so the application on AIX can survive the LUNs takeover/failback. Thanks.

  17. From the application log, the entire LUNs takeover took around 80 seconds, and AIX spent 15-20 seconds checking the primary path before failing over to the alternate path, so AIX spent less than 60 seconds checking the alternate path (it failed user IOs after that). The length of time for LUNs takeover varies, but in our test we used a predefined configuration for verification. Thanks.