
BUILD AND CONFIGURE:

POWERHA 7.1: install + cluster config
(In the PowerHA 7.1 configuration below, only a service IP and application scripts were added, no VG or filesystems.)

Before installing the cluster filesets, check the AIX prerequisite:
bos.cluster.rte           7.1.3.30  COMMITTED  Cluster Aware AIX

Install the filesets below: smitty install
cluster.adt.es
cluster.doc.en_US.es
cluster.es.client
cluster.es.cspoc
cluster.es.nfs
cluster.es.server
cluster.license
cluster.man.en_US.es
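
The same filesets can also be installed from the command line instead of SMIT; a minimal sketch, assuming the install images are in /tmp/powerha (the path is just an example):

```shell
# check the CAA prerequisite first
lslpp -l bos.cluster.rte

# install the PowerHA filesets from a local directory (example path)
installp -acgXY -d /tmp/powerha \
    cluster.es.server cluster.es.client cluster.es.cspoc \
    cluster.es.nfs cluster.adt.es cluster.license \
    cluster.doc.en_US.es cluster.man.en_US.es

# verify the installation afterwards
lppchk -v
lslpp -l "cluster.*"
```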

After the installation, update to SP4 and reboot.

--------------------------

1. setup network + files + repository disk

- configure boot IP: chdev -l en0 -a netaddr=192.168.31.97 -a netmask=255.255.255.0 -a state=up (non-routable network, for internal cluster traffic)
- set up /etc/hosts: put all needed IPs there (boot, service, persistent node...)
- set up /etc/cluster/rhosts: put all needed IPs there as well
- recycle clcomd: stopsrc -s clcomd; startsrc -s clcomd
- add a PVID to the repository disk: chdev -l hdisk2 -a pv=yes (the repository LUN should be shared and at least 512 MB)
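
As a sketch, the two files could look like this on a two-node cluster (all names and addresses are made-up examples):

```
# /etc/hosts -- every address the cluster uses (boot, service, persistent)
192.168.31.97   node1_boot
192.168.31.98   node2_boot
10.10.10.51     cluster_svc      # service IP
10.10.10.61     node1_pers       # persistent IP of node1
10.10.10.62     node2_pers       # persistent IP of node2

# /etc/cluster/rhosts -- one IP address (or resolvable name) per line
192.168.31.97
192.168.31.98
```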


2. create cluster (smitty hacmp)

- create cluster: Cluster nodes and netw. --> standard cl. depl. --> setup a cluster: add a name + local node
- add 2nd node to cluster: Manage nodes --> Add a node: choose node + comm. path is the same as node name
- discover network interfaces + disks
- Manage network --> Networks --> Remove a Netw. (remove unnecessary networks)
- Manage repository disks --> add repository disks 1 by 1
- Verify and Synchronize Cluster config
(you can create a netmon.cf file if needed)
- start cluster: smitty clstart: choose both nodes, and start up cluster information daemon: true
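
The SMIT steps above can also be scripted with the clmgr command shipped with PowerHA 7.1; a rough sketch with example names (the exact attribute names are worth checking in the clmgr man page):

```shell
# create the cluster with both nodes and the repository disk
clmgr add cluster mycluster NODES=node1,node2 REPOSITORY=hdisk2

# verify and synchronize the configuration
clmgr sync cluster

# start cluster services on all nodes
clmgr online cluster
```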


3. resource + RG config

- add appl. start/stop scripts: cluster app. resource --> resource --> appl. contr. script --> add (name, script locations, foreground)
- add service IP: cluster app. resource --> resource --> conf./ add service ip --> add ip
- add Resource Group: conf. RG --> Add a RG (nodes is priority order, online on home node, fallover to next, never fallback)
- add resources to RG (change RG): cluster app. resources --> Resource Groups --> Change Resources for RG (add appl. + service IP)
- verify and synchronize cluster (it will start RG as well)
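
After the synchronization it is worth checking that the resource group really came online; some useful commands:

```shell
lssrc -ls clstrmgrES | grep state          # cluster manager state (ST_STABLE when settled)
/usr/es/sbin/cluster/clstat -o             # one-time cluster status snapshot
/usr/es/sbin/cluster/utilities/clRGinfo    # resource group state on each node
```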


-----------------

After the cluster is started, cldump may not work correctly because of missing lines in /etc/snmpdv3.conf

Check if the lines below exist there:
VACM_VIEW defaultView        1.3.6.1.4.1.2.3.1.2.1.5    - included -

smux     1.3.6.1.4.1.2.3.1.2.1.2         gated_password          
smux     1.3.6.1.4.1.2.3.1.2.1.5      clsmuxpd_password

COMMUNITY public    public     noAuthNoPriv 0.0.0.0     0.0.0.0    
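
After adding the missing lines, snmpd and clinfoES have to be recycled so cldump picks them up; a possible sequence:

```shell
stopsrc -s clinfoES
stopsrc -s snmpd
startsrc -s snmpd
sleep 10            # give snmpd time to come up
startsrc -s clinfoES
```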

more info: http://lparbox.com/how-to/powerha-cluster/21


---------------------------------------
---------------------------------------



HACMP 5.3, POWERHA 6.1: install + cluster config


1. Network and the /etc/hosts file should be set up very thoroughly
(2 different networks, service ip is coming from one of the networks)

Boot interfaces are those that share the service subnet.
Standby interfaces are those that are not on the service subnet.

IPAT via IP REPLACEMENT (service IP is in the same subnet with boot IP)
IPAT via IP ALIASING (all IPs are in different subnets)

IP Aliasing in detail:
All base IP addresses on a node must be on separate subnets. (If heartbeat monitoring over IP aliases is not used)
All service IP addresses must be on a separate subnet from any of the base subnets.
The service IP addresses can all be in the same or different subnets.
The subnet masks must all be the same

IP Replacement in detail:
Base (boot) and service IP addresses on the primary adapter must be on the same subnet.
All base IP addresses on the secondary adapters must be on separate subnets (different from each other and from the primary adapter).
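
As a made-up example of the two schemes (netmask 255.255.255.0 everywhere):

```
IPAT via ALIASING (all base IPs and the service IP in different subnets):
    node1 en0 (base):    10.10.1.11      node2 en0 (base):    10.10.1.12
    node1 en1 (base):    10.10.2.11      node2 en1 (base):    10.10.2.12
    service IP:          10.10.3.50      (separate subnet, added as an alias)

IPAT via REPLACEMENT (service IP in the boot subnet):
    node1 en0 (boot):    10.10.1.11      node2 en0 (boot):    10.10.1.12
    node1 en1 (standby): 10.10.2.11      node2 en1 (standby): 10.10.2.12
    service IP:          10.10.1.50      (same subnet as the boot IPs)
```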

2. Storage, LVM
-shared disk (on both nodes):
    cfgmgr
    chdev -l hdiskX -a pv=yes


-create enhanced concurrent vg:
    lvlstmajor (on both nodes)
    mkvg -C -y orabckvg -s 128 -n -V 50 hdiskpower50 (autovaryon should be turned off; the -n flag does this)
    lv, fs if needed

    on the other node: importvg -V 50 -y orabckvg -n hdiskpower50
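
A quick sanity check on both nodes after the import (sketch):

```shell
lspv | grep hdiskpower50    # the same PVID and VG name must appear on both nodes
lsvg orabckvg               # on the node where it is varied on: check 'AUTO ON: no'
```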

3. Application scripts:
stop/start application scripts should be created
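
PowerHA takes separate start and stop scripts; a common pattern is one script that takes the action as an argument and two thin wrappers that call it. A minimal sketch (paths and messages are made up, the real commands depend on the application):

```shell
#!/bin/sh
# app_ctl: hypothetical start/stop logic for a cluster application server
# (the real start/stop commands depend on the application)
app_ctl() {
    case "$1" in
        start)
            # e.g. su - appuser -c "/opt/app/bin/startup" would go here
            echo "application started"
            ;;
        stop)
            # e.g. su - appuser -c "/opt/app/bin/shutdown" would go here
            echo "application stopped"
            ;;
        *)
            echo "usage: app_ctl {start|stop}" >&2
            return 1
            ;;
    esac
    return 0
}
# the start wrapper script would simply call: app_ctl start
# the stop wrapper script would simply call:  app_ctl stop
```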

4. Install HACMP filesets:
-cluster.es
-cluster.es.cspoc
-cluster.license
-cluster.man.en_US.es

reboot


5. Extended Config -> Extended Topology:
    -Config. HACMP Cluster
    -Config HACMP Node (set nodes and ips)

6. Extended Config -> Discover ..
    "/usr/es/sbin/cluster/etc/rhosts" file possibly needed with necessary ips

7. Extended Config -> Extended Topology
    Configure HACMP Networks: give a name and set netmask
    -enable IP address takeover with Alias? --> yes: if Aliasing
                               --> no: if Replacement
    Configure HACMP Interface/Device: Add discovered -> comm interface
    ALIASING: 1 network configured and both ...boot_1 and ...boot_2 addresses were used because they are in different subnets
    IP REPLACEMENT: 1 network configured and the ...boot_1 addresses will be used because the ...boot_2 IPs are in a different subnet from the service IP

(Not necessary, but here we can do verification and synchronization to see if everything is correct: Ext. Config -> Ext. Ver...)

8. RG and resources:
    Extended Config -> Ext. resource:
    -Ext. Res. Group:
    -startup policy:
        if IP replacement with 1 NIC per network per node -> Online Using Distribution Policy
        if IP replacement with more NICs on a network on the nodes -> anything
        if IP aliasing -> anything
    -Extended Resource:
    -Appl. Server: (start, stop scripts)
    -Service IP..: Configurable on Multiple Nodes -> which network -> F4 to choose
    RG and resources are ready to be related together:
    Extended Config -> Extended RG -> Change/show Resources for a RG:    with F4 add:
    -Service IP
    -Appl Serv
    -VG

9. Synch and Verif

11 comments:

  1. Just a query: do we need to reboot once we install the HACMP filesets? I never did that, so I just want to double-confirm.

    Replies
    1. In the HACMP installation guide this is written:

      Completing the installation
      After you install the HACMP software, verify the software installation.
      1. Verify the software installation...
      2. Run the commands lppchk -v and lppchk -c "cluster.*"...
      3. Reboot each HACMP cluster node and client.

      I have seen many times that when I installed something it worked perfectly without reboot...so probably it is the case with your example as well...however to make sure everything will be loaded perfectly at start up, a reboot is a good solution...I think :)

  2. At point 2. -a is missing /chdev -l hdiskX -a pv=yes/
    At point 9. 'és' should be 'and' to be more international. :-)

    I like your blog. Keep up with it!
    Best wishes, Laci

    Replies
    1. Thanks for the update! (kösz, Hungarian for "thanks" :-))

  3. I think HP serviceguard is better than PowerHA. :)

  4. Feels complicated...:) Can we configure HACMP using CLI only?
    Do you have step by step procedure for setting up a two node HACMP cluster?

    Thanks

    Replies
    1. I see... I have never used the CLI for configuration; I think the SMIT interfaces do a great job and make configuration simpler. A very general step-by-step procedure is on this page above... without any experience it is not the best description, so for that purpose an IBM documentation is probably better.

    2. PowerHA 7.1 has the ability to create a cluster with the CLI. Please find below an excellent article on creating a cluster with the CLI:
      http://www.ibmsystemsmag.com/aix/administrator/systemsmanagement/clmgr--A-Technical-Reference/

  5. Hi AIX
    Thanks for everything. Do you have any link with a step-by-step process for an AIX TL upgrade on servers running in a two-node cluster? I will be assigned this task in two days, no clue as of yet.

  6. Do you have a server build checklist? Could you please share?
