
POWERVC - NOVALINK


Novalink

NovaLink is a sort of "replacement" for the HMC. In a usual installation all OpenStack services (Neutron, Cinder, Nova etc.) were running on the PowerVC host. For example, the Nova service required one process for each Managed System:

# ps -ef | grep [n]ova-compute
nova       627     1 14 Jan16 ?        06:24:30 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_10D5555.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_10D5555.log
nova       649     1 14 Jan16 ?        06:30:25 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_65E5555.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_65E5555.log
nova       664     1 17 Jan16 ?        07:49:27 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1085555.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_1085555.log
nova       675     1 19 Jan16 ?        08:40:27 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_06D5555.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_06D5555.log
nova       687     1 18 Jan16 ?        08:15:57 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6575555.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_6575555.log

Besides the extra load, all PowerVC actions had to go through the HMC. The HMC was a single point of contact for every action, which could cause slowness in large environments. In 2016 IBM came up with a solution: a special LPAR on each Managed System can do all those actions that an HMC would usually do. This special LPAR is called NovaLink. If this special LPAR is created on all Managed Systems, PowerVC stops querying the HMC and queries the NovaLink LPARs directly, where some OpenStack services (Nova, Neutron, Ceilometer) are also running. NovaLink is a Linux LPAR (currently Ubuntu or RHEL) which has a CLI and an API.

---------------------------------------------------------------------

Novalink Install/Update

+--------------------------------+ Welcome +---------------------------------+
|                                                                            |
| Welcome to the PowerVM NovaLink Install wizard.                            |
|                                                                            |
| (*) Choose to perform an installation.                                     |
| This will perform an installation of the NovaLink partition, its core      |
| components and REST APIs, and all needed Virtual I/O servers.              |
|                                                                            |
| ( ) Choose to repair a system.                                             |
| This will repair the system by performing a rescue/repair of existing      |
| Virtual I/O servers and NovaLink partitions.                               |
| Choose this option if PowerVM is already installed but is corrupted        |
| or there is a failure.                                                     |
|                                                                            |
|                                                                            |
| <Next> <Cancel>                                                            |
|                                                                            |
+----------------------------------------------------------------------------+
<Tab>/<Alt-Tab> between elements | <Space> selects | <F12> next screen


NovaLink is a standard LPAR whose I/O is provided by the VIOS (therefore no physical I/O is required), with a special permission bit that enables PowerVM management authority. If you install the NovaLink environment on a new managed system, the NovaLink installer creates the NovaLink partition automatically. It creates the Linux and VIOS LPARs and installs the operating systems and the NovaLink software. It creates logical volumes from the VIOS rootvg for the NovaLink partition. (The VIOS installation files (extracted mksysb files from the VIOS DVD ISO) need to be added to the NovaLink installer manually: https://www.ibm.com/support/knowledgecenter/POWER8/p8eig/p8eig_creating_iso.htm)


If you install the NovaLink software on a system that is already managed by an HMC, use the HMC to create a Linux LPAR and set the powervm_mgmt_capable flag to true (the NovaLink partition must be granted the PowerVM management capability):
$ lssyscfg -m p850 -r lpar --filter "lpar_ids=1"
name=novalink,lpar_id=1,lpar_env=aixlinux,state=Running,resource_config=1,os_version=Unknown,logical_serial_num=211FD2A1,default_profile=default,curr_profile=default,work_group_id=none,shared_proc_pool_util_auth=0,allow_perf_collection=0,power_ctrl_lpar_ids=none,boot_mode=norm,lpar_keylock=norm,auto_start=1,redundant_err_path_reporting=0,rmc_state=active,rmc_ipaddr=129.40.226.21,time_ref=0,lpar_avail_priority=127,desired_lpar_proc_compat_mode=default,curr_lpar_proc_compat_mode=POWER8,suspend_capable=0,remote_restart_capable=0,simplified_remote_restart_capable=0,sync_curr_profile=0,affinity_group_id=none,vtpm_enabled=0,powervm_mgmt_capable=0
$ chsyscfg -m seagull -r lpar -i lpar_id=1,powervm_mgmt_capable=1

powervm_mgmt_capable flag is valid for Linux partitions only:
0 - do not allow this partition to provide PowerVM management functions
1 - enable this partition to provide PowerVM management functions


PowerVM NovaLink by default installs Ubuntu, but also supports RHEL. The installer provides an option to install RHEL after the required setup or configuration of the system completes. For easier installation of PowerVM NovaLink on multiple servers, set up a netboot (bootp) server to install PowerVM NovaLink from a network.

Installation log files are in /var/log/pvm-install, and the NovaLink installer creates an installation configuration file /var/log/pvm-install/novalink-install.cfg (which can be used if we need to restore the NovaLink partition). Updating PowerVM NovaLink is currently driven entirely through Ubuntu's apt package system.

---------------------------------------------------------------------

Novalink and HMC

NovaLink provides a direct connection to the PowerVM server rather than proxying through an HMC. For example a VM create request in PowerVC goes directly to NovaLink, which then communicates with PowerVM. This allows improved scalability (from 30 to 200+ servers), better performance, and better alignment with OpenStack.

Hosts can be managed by NovaLink only (without an HMC), or can be co-managed (NovaLink and HMC together). In this co-managed setup either NovaLink or the HMC is the master. Both of them have read access to the partition configuration, but only the master can make changes to the system. Typically NovaLink will be the co-management master; however, if a task has to be done from the HMC (like a firmware upgrade), we can explicitly request master authority for the HMC, perform the action, and then give the authority back to NovaLink.


HMC: saves the LPAR configuration in the FSP NVRAM, uses the FSP lock mechanism, and receives events from the FSP/PHYP
NovaLink: receives events from the PHYP only; it is not aware of the FSP and does not receive FSP events

In co-management mode there are no partition profiles. In OpenStack, the concept of a flavor is similar to a profile, and these are all managed by OpenStack, not the HMC or NovaLink. For example, you can activate a partition with its current configuration, but not with a profile.
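
For example, from the NovaLink CLI (a minimal sketch using the pvmctl syntax described later in this page; vm1 is a placeholder name):
$ pvmctl lpar power-on -i name=vm1               <--activates the partition with its current configuration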

To update the firmware on a system that is managed by NovaLink only, use the ldfware command on the service partition. If the system is co-managed by NovaLink and an HMC, firmware updates can be performed only from the HMC. The HMC must be set to master mode to update the firmware. After the firmware update is finished, master mode can be set back to NovaLink. (The currently running operation has to finish before the change completes; a force option is also available.)

In HMC CLI:
$ chcomgmt -m <managed_system> -o setmaster -t norm              <--set HMC to be master on the specified Man. Sys.
$ chcomgmt -m <managed_system> -o relmaster                      <--set Novalink to be master again

In Novalink CLI:
$ pvmctl sys list -d master                                      <--list master (-d: display)
$ pvmctl <managed_system> set-master                             <--set Novalink to be master

---------------------------------------------------------------------

Novalink partition and services

NovaLink is not part of PowerVC, but the two technologies work closely together. If NovaLink is installed on a host, even if an HMC is connected to it, PowerVC must manage that host through the NovaLink partition. The NovaLink LPAR (with the installed software packages) provides OpenStack services and can perform virtualization tasks in the PowerVM/Hypervisor layer. The following OS packages provide these functions in NovaLink:
-ibmvmc-dkms: this is the device driver kernel module that allows NovaLink to talk to the Hypervisor
-pvm-core: this is the base novalink package. It primarily provides a shared library to the REST server.
-pvm-rest-server: this is the java webserver used to run the REST API service
-pvm-rest-app: this is the REST APP that provides all the REST APIs and communicates with pvm-core
-pypowervm: pypowervm library provides a Python-based API wrapper for interaction with the PowerVM API
-pvm-cli: this provides the python based CLI (pvmctl)

A meta package called pvm-novalink ensures dependencies between all these packages. When updating, just update pvm-novalink and it will handle the rest.
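
For example, on an Ubuntu-based NovaLink partition the update is a standard apt operation (a minimal sketch; run with sudo rights):
$ sudo apt-get update                                      <--refresh the package lists
$ sudo apt-get install --only-upgrade pvm-novalink         <--pulls in the updated pvm-core, pvm-rest-server, pvm-rest-app, pypowervm and pvm-cli packages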

NovaLink contains two system services that should always be running:
- pvm-core
- pvm-rest

If you are not able to complete tasks on NovaLink, verify whether these services are running. Use the systemctl command to view the status of these services and to stop, start, and restart these services. (Generally restarting pvm-core will cause pvm-rest to also restart.)
# systemctl status pvm-core / pvm-rest
# systemctl stop pvm-core / pvm-rest
# systemctl start pvm-core / pvm-rest
# systemctl restart pvm-core / pvm-rest


With these installed packages NovaLink provides 2 main groups of services: OpenStack services and NovaLink Core services:


OpenStack Services
- nova-powervm: Nova is the compute service of OpenStack. This handles VM management (creating VMs, adding/removing CPU/RAM...)
- networking-powervm: this is the network service of OpenStack (Neutron). Provides functions to manage SEA, VLANs ...
- ceilometer-powervm: Ceilometer is the monitoring service of Openstack. Collects monitoring data for CPU, network, memory, and disk usage

These services use the pypowervm library, which is a Python-based library that interacts with the PowerVM REST API.


NovaLink Core Services 
These services communicate with the PHYP and the VIOS and provide a direct connection to the managed system.
- REST API: It is based on the API that is used by the HMC. It also provides a Python-based software development kit.
- CLI: It provides shell interaction with PowerVM. It is also Python-based.

---------------------------------------------------------------------

RMC with PowerVM NovaLink

The RMC connection between NovaLink and each LPAR is routed through a dedicated internal virtual switch (the mandatory name is MGMTSWITCH) and the virtual network uses PVID 4094.

It uses an IPv6 link, and VEPA mode has to be configured, so LPARs can NOT communicate directly with each other; network traffic goes out to the switch first. After it is configured correctly, NovaLink and the client LPARs can communicate for DLPAR and mobility operations. The minimum RSCT version to use RMC with NovaLink is 3.2.1.0. The management vswitch is required for LPARs deployed using PowerVC; the HMC can continue using RMC through the existing mechanisms.

The LPARs are using virtual Ethernet adapters to connect to NovaLink through a virtual switch. The virtual switch is configured to communicate only with the trunk port. An LPAR can therefore use this virtual network only to connect with the NovaLink partition. LPARs can connect with partitions other than the NovaLink partition only if a separate network is configured for this purpose.
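
From an AIX client LPAR the RMC connection towards the management partition can be verified with RSCT commands, for example (a quick check; the output format depends on the RSCT level):
# lsrsrc IBM.MCP                                 <--lists the management consoles (HMC/NovaLink) known to the LPAR over RMC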

---------------------------------------------------------------------

Novalink CLI (pvmctl, viosvrcmd)

The NovaLink command-line interface (CLI) is provided by the Python-based pvm-cli package. It uses the pvmctl and viosvrcmd commands for most operations. Execution of the pvmctl command is logged in /var/log/pvm/pvmctl.log, and the commands can only be executed by users who are in the pvm_admin group. The admin user (i.e. padmin) is added to the group automatically during installation.

pvmctl

It runs operations against an object: pvmctl OBJECT VERB

Supported OBJECT types:
ManagedSystem (sys)
LogicalPartition (lpar or vm)
VirtualIOServer (vios)
SharedStoragePool (ssp)
IOSlot (io)
LoadGroup (lgrp)
LogicalUnit (lu)
LogicalVolume (lv)
NetworkBridge (nbr or bridge)
PhysicalVolume (pv)
SharedEthernetAdapter (sea)
VirtualEthernetAdapter (vea or eth)
VirtualFibreChannelMapping (vfc or vfcmapping)
VirtualMediaRepository (vmr or repo)
VirtualNetwork (vnet or net)
VirtualOpticalMedia (vom or media)
VirtualSCSIMapping (scsi or scsimapping)
VirtualSwitch (vswitch or vsw)

Supported operations (VERB) example:
logicalpartition (vm,lpar) supported operations: create, delete, list, migrate, migrate-recover, migrate-stop, power-off, power-on, restart, update
IOSlot (io) supported operations: attach, detach, list
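
The complete list of objects and verbs can be queried from the CLI itself (assuming the standard --help option of the Python-based pvmctl command):
$ pvmctl --help                                  <--lists all supported object types
$ pvmctl lpar --help                             <--lists the operations supported for a given object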

---------------------------------------------------------------------

pvmctl listing objects

$ pvmctl lpar list
Logical Partitions
+----------+----+----------+----------+----------+-------+-----+-----+
| Name     | ID | State    | Env      | Ref Code | Mem   | CPU | Ent |
+----------+----+----------+----------+----------+-------+-----+-----+
| novalink | 2  | running  | AIX/Lin> | Linux p> | 2560  | 2   | 0.5 |
| pvc      | 3  | running  | AIX/Lin> | Linux p> | 11264 | 2   | 1.0 |
| vm1      | 4  | not act> | AIX/Lin> | 00000000 | 1024  | 1   | 0.5 |
+----------+----+----------+----------+----------+-------+-----+-----+

$ pvmctl lpar list --object-id id=2
Logical Partitions
+----------+----+---------+-----------+---------------+------+-----+-----+
| Name     | ID | State   | Env       | Ref Code      | Mem  | CPU | Ent |
+----------+----+---------+-----------+---------------+------+-----+-----+
| novalink | 2  | running | AIX/Linux | Linux ppc64le | 2560 | 2   | 0.5 |
+----------+----+---------+-----------+---------------+------+-----+-----+

$ pvmctl lpar list -d name id state --where LogicalPartition.state=running
name=novalink,id=2,state=running
name=pvc,id=3,state=running

$ pvmctl lpar list -d name id state --where LogicalPartition.state!=running
name=vm1,id=4,state=not activated
name=vm2,id=5,state=not activated

---------------------------------------------------------------------

pvmctl creating objects:

creating an LPAR:
$ pvmctl lpar create --name vm1 --proc-unit .1 --sharing-mode uncapped --type AIX/Linux --mem 1024 --proc-type shared --proc 2
$ pvmctl lpar list
Logical Partitions
+-----------+----+-----------+-----------+-----------+------+-----+-----+
| Name      | ID | State     | Env       | Ref Code  | Mem  | CPU | Ent |
+-----------+----+-----------+-----------+-----------+------+-----+-----+
| novalink> | 1  | running   | AIX/Linux | Linux pp> | 2560 | 2   | 0.5 |
| vm1       | 4  | not acti> | AIX/Linux | 00000000  | 1024 | 2   | 0.1 |
+-----------+----+-----------+-----------+-----------+------+-----+-----+


creating a virtual ethernet adapter:
$ pvmctl vswitch list
Virtual Switches
+------------+----+------+---------------------+
| Name       | ID | Mode | VNets               |
+------------+----+------+---------------------+
| ETHERNET0  | 0  | Veb  | VLAN1-ETHERNET0     |
| MGMTSWITCH | 1  | Vepa | VLAN4094-MGMTSWITCH |
+------------+----+------+---------------------+

$ pvmctl vea create --slot 2 --pvid 1 --vswitch ETHERNET0 --parent-id name=vm1

$ pvmctl vea list
Virtual Ethernet Adapters
+------+------------+------+--------------+------+-------+--------------+
| PVID | VSwitch    | LPAR | MAC          | Slot | Trunk | Tagged VLANs |
+------+------------+------+--------------+------+-------+--------------+
| 1    | ETHERNET0  | 1    | 02224842CB34 | 3    | False |              |
| 1    | ETHERNET0  | 4    | 1A05229C5DAC | 2    | False |              |
| 1    | ETHERNET0  | 2    | 3E5EBB257C67 | 3    | True  |              |
| 1    | ETHERNET0  | 3    | 527A821777A7 | 3    | True  |              |
| 4094 | MGMTSWITCH | 1    | CE46F57C513F | 6    | True  |              |
| 4094 | MGMTSWITCH | 2    | 22397C1B880A | 6    | False |              |
| 4094 | MGMTSWITCH | 3    | 363100ED375B | 6    | False |              |
+------+------------+------+--------------+------+-------+--------------+

---------------------------------------------------------------------

pvmctl updating/deleting objects

Update the desired memory on vm1 to 2048 MB:
$ pvmctl lpar update -i name=vm1 --set-fields PartitionMemoryConfiguration.desired=2048
$ pvmctl lpar update -i id=2 -s PartitionMemoryConfiguration.desired=2048


Delete an LPAR:
$ pvmctl lpar delete -i name=vm4
[PVME01050010-0056] This task is only allowed when the partition is powered off.
$ pvmctl lpar power-off -i name=vm4
Powering off partition vm4, this may take a few minutes.
Partition vm4 power-off successful.
$ pvmctl lpar delete -i name=vm4

---------------------------------------------------------------------

Additional commands

$ pvmctl vios power-off -i name=vios1            <--shutdown VIOS
$ pvmctl lpar power-off --restart -i name=vios1  <--restart LPAR

$ mkvterm -m sys_name -p vm1                     <--open a console

---------------------------------------------------------------------

viosvrcmd

viosvrcmd runs VIOS commands from the NovaLink LPAR on the specified VIO server. The command is passed over to the VIOS through the underlying RMC connection.

An example: 
Allocating a logical unit from an existing SSP on the VIOS at partition id 2. The allocated logical unit is then mapped to a virtual SCSI adapter in the target LPAR.

$ viosvrcmd --id 2 -c "lu -create -sp pool1 -lu vdisk_vm1 -size 20480"    <--create a Logical Unit on VIOS (vdisk_vm1)
Lu Name:vdisk_vm1
Lu Udid:955b26de3a4bd643b815b8383a51b718

$ pvmctl lu list
Logical Units
+-------+-----------+----------+------+------+-----------+--------+
| SSP   | Name      | Cap (GB) | Type | Thin | Clone     | In use |
+-------+-----------+----------+------+------+-----------+--------+
| pool1 | vdisk_vm1 | 20.0     | Disk | True | vdisk_vm1 | False |
+-------+-----------+----------+------+------+-----------+--------+

$ pvmctl scsi create --type lu --lpar name=vm1 --stor-id name=vdisk_vm1 --parent-id name=vios1

---------------------------------------------------------------------

Backups

PowerVM NovaLink automatically backs up hypervisor (LPAR configurations) and VIOS configuration data by using cron jobs. Backup files are stored in the /var/backups/pvm/SYSTEM_MTMS/ directory. VIOS configuration data is copied from the VIOS (/home/padmin/cfgbackups) to Novalink.

$ ls -lR /var/backups/pvm/8247-21L*03212E3CA
-rw-r----- 1 root pvm_admin 2401 Jun 1 00:15 system_daily_01.bak
-rw-r----- 1 root pvm_admin 2401 May 30 00:15 system_daily_30.bak
-rw-r----- 1 root pvm_admin 2401 May 31 00:15 system_daily_31.bak
-rw-r----- 1 root pvm_admin 2401 Jun 1 01:15 system_hourly_01.bak
-rw-r----- 1 root pvm_admin 2401 Jun 1 02:15 system_hourly_02.bak
-rw-r----- 1 root pvm_admin 4915 Jun 1 00:15 vios_2_daily_01.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4914 May 30 00:15 vios_2_daily_30.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4910 May 31 00:15 vios_2_daily_31.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4911 Jun 1 00:15 vios_3_daily_01.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4911 May 30 00:15 vios_3_daily_30.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4910 May 31 00:15 vios_3_daily_31.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4909 Jun 1 01:15 vios_3_hourly_01.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4909 Jun 1 02:15 vios_3_hourly_02.viosbr.tar.gz

The hypervisor (partition configuration) backup can be manually initiated by using the bkprofdata command:
$ sudo bkprofdata -m gannet -o backup
$ ls -l /etc/pvm
total 8
drwxr-xr-x 2 root root 4096 May 26 17:32 data
-rw-rw---- 1 root root 2401 Jun 2 17:05 profile.bak
$ cat /etc/pvm/profile.bak
FILE_VERSION = 0100
CONFIG_VERSION = 0000000000030003
TOD = 1464901557123
MTMS = 8247-21L*212E3CA
SERVICE_PARTITION_ID = 2
PARTITION_CONFIG =
lpar_id\=1,name\=novalink_212E3CA,lpar_env\=aixlinux,mem_mode\=ded,min_mem\=2048,desired_mem\=2560,max_mem\=16384,hpt_ratio\=6,mem_expansion\=0.00,min_procs\=1,desired_procs\=2,max_procs\=10,proc_mode\=shared,shared_proc_pool_id\=0,sharing_mode\=uncap,min_proc_units\=0.05,desired_proc_units\=0.50,max_proc_units\=10.00,uncap_weight\=128,allow_perf_collection\=0,work_group_id\=none,io_slots\=2101001B/none/0,"virtual_eth_adapters\=3/1/1//0/0/0/B2BBCA66F6F1/all/none,6/1/4094//1/0/1/EA08E1233F8A/all/none","virtual_scsi_adapters\=4/client/2/vios1/2/0,5/client/3/vios2/2/0",auto_start\=1,boot_mode\=norm,max_virtual_slots\=2000,lpar_avail_priority\=127,lpar_proc_compat_mode\=default
PARTITION_CONFIG =
lpar_id\=2,name\=vios1,lpar_env\=vioserver,mem_mode\=ded,min_mem\=1024,desired_mem\=4096,max_mem\=16384,hpt_ratio\=6,mem_expansion\=0.00,min_procs\=2,desired_procs\=2,max_procs\=64,proc_mode\=shared,shared_proc_pool_id\=0,sharing_mode\=uncap,min_proc_units\=0.10,desired_proc_units\=1.00,max_proc_units\=10.00,uncap_weight\=255,allow_perf_collection\=0,work_group_id\=none,"io_slots\=21010013/none/0,21030015/none/0,2104001E/none/0","virtual_eth_adapters\=3/1/1//1/0/0/36BACB2677A6/all/none,6/1/4094//0/0/1/468CA1242EC8/all/none",virtual_scsi_adapters\=2/server/1/novalink_212E3CA/4/0,auto_start\=1,boot_mo
...
...


The VIOS configuration data backup can be manually initiated by using the viosvrcmd --id X -c "viosbr" command:
$ viosvrcmd --id 2 -c "viosbr -backup -file /home/padmin/cfgbackups/vios_2_example.viosbr"
Backup of this node (gannet2.pbm.ihost.com.pbm.ihost.com) successful
$ viosvrcmd --id 2 -c "viosbr -view -file /home/padmin/cfgbackups/vios_2_example.viosbr.tar.gz"


$ viosvrcmd --id X -c "backupios -cd /dev/cd0 -udf -accept"             <--creates bootable media
$ viosvrcmd --id X -c "backupios -file /mnt [-mksysb]"                  <--for NIM backup on NFS (restore with installios (or mksysb))
$ viosvrcmd --id X -c "backupios -file /mnt [-mksysb] [-nomedialib]"    <--exclude optical media

---------------------------------------------------------------------

POWERVC - SSP


SSP Administration

PowerVC and SSP

SSP is a fully supported storage provider in PowerVC. SSP was developed much earlier than PowerVC, but its shared setup fits very well with the cloud nature of PowerVC. After creating an SSP (a few clicks in the HMC GUI), PowerVC (which is connected to the HMC) recognizes it automatically, and without any additional tasks we can start to create LUs and deploy VMs. (There is no strict distinction, but the word LUN is used more for physical volumes attached to the VIOS (lspv), and the word LU for virtual disks created in the SSP.)

What is important is that each VIO server which is part of the SSP cluster has to see the same LUNs. The virtual disks (LUs) which are created in the SSP can be found as files in a special filesystem: /var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310. This filesystem is created during SSP creation and it is available on each VIO server. (These LUs are basically files in that filesystem, and because the LUs are thin provisioned, the files are so-called 'sparse files'.) SSP commands can be run as padmin on each VIOS.

/var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310/VOL1  <--contains LUs available in PowerVC
/var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310/IM    <--contains Images available in PowerVC
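
Because the LUs are thin provisioned sparse files, the apparent file size (ls) and the space actually allocated in the pool (du) can differ a lot. A quick illustration (the LU file name below is just an example):
# cd /var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310/VOL1
# ls -l  volume-example_lu                            <--shows the full provisioned size
# du -sm volume-example_lu                            <--shows the space actually allocated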

cluster -list                                         <--lists available SSP clusters
cluster -status -clustername <CLUSTER_NAME> -verbose  <--it will show primary database node as well (grep -p DBN)

lu -list                                              <--lists SSP LUs
snapshot -list                                        <--lists snapshots (images in PowerVC)
lssp -clustername <Cluster_Name> -sp <SSP_Name> -bd   <--old command to list LUs (bd is backing device)


-------------------------------------

Adding a new LUN to SSP

If we want to increase the available free space in SSP we need to add a new LUN to it.

1. request a new LUN from SAN team      <--should be a shared LUN assigned to each VIO server in SSP
2. cfgmgr (or cfgdev as padmin)         <--bring up the new disk on all VIOS, make sure it is the same disk
3. chdev -l hdisk$i -a queue_depth=32   <--set any parameters needed
4. in HMC GUI add new disk to SSP       <--on HMC SSP menu choose SSP, then check mark SSP (System Default) --> Action --> Add Capacity

After that, PowerVC will recognize the new capacity automatically; no steps are needed in PowerVC. df -g can be used to monitor the available free space in /var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310.
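
For example (as root on any VIOS in the cluster):
# df -g /var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310      <--the Free column shows the remaining capacity of the pool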

-------------------------------------

Removing leftover LUs from SSP

Sometimes a volume has been deleted in PowerVC, but it is still visible in the SSP. These leftover LUs can be removed with SSP commands (lu -remove ...). What is important is to make sure these LUs are really not used by anything. (LUs can be used by VMs and also by Images!!!)

1. check LU list on HMC, PowerVC and VIO                <--HMC and PowerVC GUI lists LUs, on VIO 'lu -list' can be used
2. lu -list -attr provisioned=false                     <--on VIO lists LUs which are not assigned to any LPARs
3. lu -remove -clustername <Cluster_Name> -luudid <ID>  <--remove a LU from SSP

If there are many LUs, this for loop can be used as well:
$ for i in `lu -list -attr provisioned=false | awk '{print $4}'`; do lu -remove -clustername SSP_Cluster_1 -luudid $i; done

-------------------------------------

PowerVC Images, Snapshots, LUs and SSP

When an Image is created in PowerVC, sometimes a file is created in the SSP and sometimes not. It depends on how the Image creation was started. This "inconsistency" can lead to problems when we want to find the specific file in the SSP which contains our Image in PowerVC.

Images and LUs are stored in 2 different places in SSP:
# ls -l /var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310
drwxr-xr-x    2 root     system         8192 Feb 19 11:19 IM       <--Images (these are LUs, but "lu" commands will not list them)
drwxr-xr-x    8 root     system          512 Feb 03 2018  VIOSCFG
drwxr-xr-x    2 root     system        32768 Feb 20 08:47 VOL1     <--LUs         


Volume and LU:
Both refer to the same disk; Volume is the term used in PowerVC, and LU is used in SSP commands. When we create a PowerVC Volume, in the background PowerVC creates a new disk in the SSP. The end result is a file in the VOL1 directory (/var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310/VOL1). The lu ... SSP commands list the files in the VOL1 directory (not in IM).

Image and Snapshot:
PowerVC Images are needed to create new AIX VMs. The word Image is used only in PowerVC (not in SSP commands). Images can be created in 2 different ways:
-capturing a VM (in the PowerVC VMs menu): a new file is created in the IM directory, and it takes minutes
-creating from a Volume (in the PowerVC Images menu): no new file is created (the same LU is used from VOL1), and it takes seconds

When an Image is ready, we can use it to create (deploy) new AIX VMs. During deployment a "snapshot" is created. A snapshot is a link to the disk from which it was created, and this snapshot is the "disk" of the new AIX VM. The actual space this AIX VM takes in the SSP is minimal (sometimes I could not even notice the change), because it contains the same data as the Image.

When we create a Volume in PowerVC, an LU is created in the SSP. This LU can exist without any assignment or dependency. When we modify this LU to be an Image in PowerVC, it is still "independent". But when we deploy AIX VMs from this Image, snapshots are created which depend on the LU. This means we cannot remove this LU (or Image) file while there are snapshots referring to it. (In the PowerVC GUI we can remove Images at any time, but this is a "fake" removal. PowerVC will show that it deleted the Image successfully, but in the background, if snapshots from that Image still exist, no free space is reclaimed and the used storage space stays the same. If we check the "snapshot -list" command, we could be surprised to find a lot of Images in the output which do not exist in PowerVC anymore but still exist in the SSP.)

-------------------------------------

snapshot -list:

This command displays all Images and their Snapshots. It does not matter how the Images were created (from a Volume or from a VM), they will be listed. The output has 2 parts. The first part (which starts with "Lu Name") lists Images which were created from Volumes (in the VOL1 directory) and their Snapshots. The second part (which starts with "Lu(Client Image)Name") lists Images in the IM directory and their Snapshots.

$ snapshot -list
Lu Name                  Size(mb)    ProvisionType    %Used Unused(mb)    Lu Udid
volume-bb_dd_61-c20cc6.. 153600      THIN             0%    153609        9191dee4a3ba... <--this is a Volume and Image (in VOL1)
Snapshot
72a40f070213e2450b8d19672f22a5dcIMSnap                                                    <--this is a VM, shows the LU id (without the IMSnap suffix)

Lu(Client Image)Name     Size(mb)    ProvisionType     %Used Unused(mb)   Lu Udid
volume-Image_7241-5c5b80ac-170b153   THIN              0%    153609       48004e96ecc2... <--this is an image in IM
     Snapshot
     c397bc118de59c4592429b2eb0bba738IMSnap                                               <--this is a VM with LUN id
     618afc4622c5286808a8173468ae161bIMSnap                                               <--this is a VM with LUN id

-------------------------------------

lu -list

This command lists all LUs, and if an LU is functioning as an Image (Images in the VOL1 directory), its Snapshots are also displayed. Images which are in the IM directory (captured from a VM) are not displayed here.

$ lu -list
POOL_NAME: SSP_1
TIER_NAME: System
LU_NAME                 SIZE(MB)    UNUSED(MB)  UDID
volume-aix-central-111~ 153600      84140       81d15130e9a76596ad0b3564973d4912
volume-bb_dd_61-c20cc6~ 153600      153609      9191dee4a3ba8fe3c7753af592027aad          <--this is a Volume and Image (in VOL1)
SNAPSHOTS
72a40f070213e2450b8d19672f22a5dcIMSnap                                                    <--its Snapshot (AIX VM is created from Image)
volume-bb_dd_61_VM-1ce~ 153600      153160      72a40f070213e2450b8d19672f22a5dc          <--LU of the deployed AIX (same IDs)
volume-cluster_1-71eda~ 20480       20473       e12ba154470b2c8bd9a54eb588fc9d2e
volume-cluster_2-c8ab7~ 20480       20432       4e36795de9a3f90710ba995b97d7ccbd

-------------------------------------

Image removal if it is listed in snapshot -list command:

1. $ snapshot -list
….
volume-Image_ls-aix-test8_capture_1_volume_1-fc19d9fd-5dac153600         THIN                 1% 150942         8256526c4e512b54cbdd689d4e1e321a


2. $ lu -remove -luudid 8256526c4e512b54cbdd689d4e1e321a
Logical unit  with udid "8256526c4e512b54cbdd689d4e1e321a" is removed.

After that, the file will be deleted from the IM directory as well.

-------------------------------------

Image removal if it is not listed in snapshot commands:

In this case there are files in the IM directory, but the usual SSP commands will not list them. It is possible to remove those files manually (rm), but it works only if these Images (LUs) are not referenced as "LU_UDID_DERIVED_FROM" anywhere.

1. Check the files in the IM directory:
ls -ltr /var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310/IM
volume-Image_AIX-710404_base2_puppet_capture-445b36df-bbee.b38417a6fd9f8eab66c2f7e6e02818cc

It lists PowerVC Images (some were not listed in the PowerVC GUI). The characters after the dot (.) show the LU id, which can be used in searches.

2. Search for the LU ids in "lu -list -verbose":
for i in `ls -ltr /var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310/IM | awk -F'.' '{print $2}'`; do ls /var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310/IM| grep $i; lu -list -verbose| grep -p $i; echo "========="; echo; done

3. Remove files if possible:
If there is no reference to a specific Image file (LU id) in this verbose output, then "rm filename" will work; otherwise we get this error:
# rm volume-Image_AIX-710404_base_nocloud_capture-0d466084-d9bb.054d6d02cf133e4aef56437f4524f016
rm: 0653-609 Cannot remove volume-Image_AIX-710404_base_nocloud_capture-0d466084-d9bb.054d6d02cf133e4aef56437f4524f016.
Operation not permitted.

-------------------------------------

SSP dd examples

# cd /var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310/IM

To export the volume to a file from SSP:
# dd if=volume-New-SSP-Image-Volume.7e2e5b5738d7adf4be3b64b9b731c2ff of=/tmp/aix7_img bs=1M

To import the volume from a file to SSP:
# dd if=/tmp/aix7_img of=VOL1/volume-New-SSP-Image-Volume.7e2e5b5738d7adf4be3b64b9b731c2ff bs=1M 

-------------------------------------

POWERVC - MAINTENANCE

PowerVC Upgrade/Install

After extracting the content of the downloaded package, the same install script is used for installing or upgrading PowerVC. The script recognizes if PowerVC is already installed. (If we do an upgrade, one of the steps during the upgrade is to uninstall the current PowerVC version and then install the new one.)

0. Backup
  - vmware snapshot


1. Red Hat update if needed (for PowerVC 1.4.4 a minimum of RHEL 7.7 is needed):
   # cat /etc/redhat-release
  Red Hat Enterprise Linux Server release 7.6 (Maipo)

  # sudo yum check-update
  # sudo yum update -y
  # sudo shutdown -r now

  Updating to a specific release with the "releasever" parameter:
  (the system should be registered with the subscription manager; if releasever is not specified, the system is updated to the latest available release)
  # yum --releasever=7.7 update


2. PowerVC  Install/Upgrade
  - download tgz (from ESS)
  - copy to PowerVC node and
  - as root:
  # tar -vzxf …
  # cd <local directory>/powervc-1.4.4.0/
  # ./install

  Upgrade/Install complained about these missing prerequisites:
  - missing python packages: # yum install python-fpconst-0.7.3-12.el7.noarch.rpm python-twisted-core-12.2.0-4.el7.x86_64.rpm python-twisted-web-12.1.0-5.el7_2.x86_64.rpm python-webob-1.2.3-7.el7.noarch.rpm python-webtest-1.3.4-6.el7.noarch.rpm python-zope-interface-4.0.5-4.el7.x86_64.rpm SOAPpy-0.11.6-17.el7.noarch.rpm
  - disabling epel repository: # /usr/bin/yum-config-manager --disable epel
  - disabling ipv6 kernel module: # export ERL_EPMD_ADDRESS=::ffff:127.0.1.1
  - disabling IPv6 entirely: export EGO_ENABLE_SUPPORT_IPV6=N

  If everything is fine, it will do the upgrade: it asks license and firewall questions etc.; logs can be checked in /opt/ibm/powervc/log
  (As I saw, during the upgrade it uninstalled the current PowerVC, then did a new installation.)
  # if it is a new installation it will ask edition: Standard, Cloud PowerVC Manager
  # License text --> press  1 and  Enter
  # Do you want the IBM PowerVC setup to configure the firewall? 1-Yes or 2-No? 2
  # Continue with the installation: 1-Yes or 2-No? 1


It will take a long time (about an hour) until it is finished, and the output will show something like this:
...
...
...
Installation task 7 of 7

Done with cleanup actions.

The validation of IBM PowerVC services post install was successful.

************************************************************
IBM PowerVC installation successfully completed at 2020-01-22T17:41:07+01:00.
Refer to /opt/ibm/powervc/log/powervc_install_2020-01-22-170753.log for more details.

Use a web browser to access IBM PowerVC at
https://powervc.lab.domain.org

Firewall configuration may be required to use PowerVC.
Refer to the Knowledge Center topic 'Ports used by PowerVC'.

************************************************************


================================


Removing and adding back SSP to PowerVC

Once PowerVC behaved strangely when Images or Volumes were created (it was hanging, new items did not show up), and IBM's recommendation was that removing the SSP from PowerVC and then adding it back should help. (The steps below do not delete data from the SSP; the volumes and all data in the SSP remain there, they are removed from PowerVC only.)


1.Backup PowerVC
  - powervc-backup: https://www.ibm.com/support/knowledgecenter/en/SSXK2N_1.4.3/com.ibm.powervc.standard.help.doc/powervc_backup_data_hmc.html

2.in PowerVC UI record details (print screen)
  - each network: name,vlan id, subnet mask, dns,ip range, SEA mappings , SR-IOV mappings
  - each host: display name, management server (hmc or novalink name), DRO options, Remote restart value
  - each image in SSP: name of the image, OS type, version, details of each VOLUME in that image: volume details, wwn, storage provider name, storage id name etc.
  - export images to file from SSP:
     # cd /var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310/IM
     # dd if=volume-Image_AIX-72_TMP_volume_1-61475a18-3726.fdb060e56143f9e7408e2ffe78de92ea of=/backup/powervc_image_dd/AIX-61_TMP bs=1M


The next steps impact PowerVC management only - NO impact on running VMs or systems

3.Unmanage all VMs and Hosts 
  - in Virtual Machines: unmanage each VM
  - in Hosts: "remove host" on each host which uses the SSP
  - confirm SSP no longer exists in storage view of PowerVC UI

4. Remove HMC
  - record details of HMC: hmc name and ip, user id, password
  - from hosts view of PowerVC click on the HMC connections tab, remove the HMC which had hosted the SSP

VMs, Hosts, Storage and Images have been removed from PowerVC, next steps will rebuild the environment.

5. Add back HMC and hosts
  - in Hosts view, HMC connections tab, click add HMC: enter hmc name, ip , user and password
  - in Hosts view, add host, leave HMC as connection type, select the HMC added above
  - Select to add all hosts

6. Confirm SSP was added back to PowerVC
  - in storage view SSP should exist again

7. Create networks (if needed)
  - adding hosts will "discover" the networks defined on them; any manually created networks may need to be recreated

8. Recreating images (https://www.ibm.com/support/knowledgecenter/en/SSXK2N_1.4.3/com.ibm.powervc.standard.help.doc/powervc_manually_import_export_volumes_hmc.html)
  - PowerVC cannot "discover" the old Images that existed; however, the volumes from those Images remain in the SSP
     To import the volume from a file to SSP:
     # create a volume
     # cd /var/vio/SSP/SSP_Cluster_1/D_E_F_A_U_L_T_061310/VOL1
     dd if=/backup/powervc_image_dd/AIX-61_TMP of=volume-AIX-61_TMP-ci-4043e86a-8e35.5249022804b1cebcc0bbf569fd2b5bd3 bs=1M

9. Validate the environment

================================

POWERVC - API PYTHON


API calls can be done through Python as well.
(I did not test these, just collected them; I used curl.)

1. get token id (with any method)
With the openstack command a token can be requested (make sure you use the same user later, as the token is valid for that user):

openstack token issue
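
A minimal curl sketch for the same step (host name, user name and password are placeholders; the token is returned in the X-Subject-Token response header):

$ curl -k -s -i -X POST https://<powervc_host>/powervc/openstack/identity/v3/auth/tokens \
    -H "Content-Type: application/json" \
    -d '{"auth":{"identity":{"methods":["password"],"password":{"user":{"name":"root","domain":{"name":"Default"},"password":"<password>"}}},"scope":{"project":{"name":"ibm-default","domain":{"name":"Default"}}}}' | grep -i x-subject-token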

---------------------------------

2. get tenant id
(we will need the id of IBM Default Tenant)
(for v3 tenant and project are interchangeable)

IBM default tenant: tenant_id="3dd716cc52c617bface86421017afd1d"

#!/usr/bin/python

import httplib
import json
import os
import sys

def main():
    token = raw_input("Please enter PowerVC token : ")
    print "PowerVC token used = "+token

    conn = httplib.HTTPSConnection('localhost')
    headers = {"X-Auth-Token":token, "Content-type":"application/json"}
    body = ""

    conn.request("GET", "/powervc/openstack/identity/v3/projects", body, headers)
    response = conn.getresponse()
    raw_response = response.read()
    conn.close()
    json_data = json.loads(raw_response)
    print json.dumps(json_data, indent=4, sort_keys=True)

if __name__ == "__main__":
    main()


---------------------------------

3. get VMs and id

#!/usr/bin/python

import httplib
import json
import os
import sys

def main():
    token = raw_input("Please enter PowerVC token : ")
    print "PowerVC token used = "+token

    tenant_id = raw_input("Please enter PowerVC Tenant ID : ")
    print "Tenant ID = "+tenant_id

    conn = httplib.HTTPSConnection('localhost')
    headers = {"X-Auth-Token":token, "Content-type":"application/json"}
    body = ""

    conn.request("GET", "/powervc/openstack/compute/v2/"+tenant_id+"/servers", body, headers)
    response = conn.getresponse()
    raw_response = response.read()
    conn.close()
    json_data = json.loads(raw_response)
    print json.dumps(json_data, indent=4, sort_keys=True)

if __name__ == "__main__":
    main()


---------------------------------

List details of all servers:
"GET", "/powervc/openstack/compute/v2/"+tenant_id+"/servers/detail"

List networks details:
"GET", "/powervc/openstack/network/v2.0/networks",

We will get Unix network id: d93e1f57-958e-47de-b94e-f90ce5277fd9

List details only of 1 specified network:
"GET", "/powervc/openstack/network/v2.0/networks/d93e1f57-958e-47de-b94e-f90ce5277fd9"



POWERVC - BASICS

IBM POWERVC (Power Virtualization Center)

PowerVC is a cloud management tool/application from IBM which can be installed on a Linux server. With the help of the GUI we can manage the virtualization of Power Systems (stop/start LPARs, create/delete/migrate LPARs, add storage to them...). It is based on OpenStack, which is an open-source cloud management project without any hardware dependency. PowerVC uses the components of OpenStack.

When a Power server is controlled by PowerVC, it can be managed:
- By the graphical user interface (GUI)
- By scripts containing the IBM PowerVC REST APIs
- By higher-level tools that call IBM PowerVC by using standard OpenStack API

In PowerVC these terms are used:
Host: This is a Power server (same as in HMC the Managed System)
VM: Virtual Machine, which is running on a Host (same as an LPAR)
Image: This is a copy of a VM which can be used for future VM creations (it is basically a disk copy of rootvg)
Volume: This is a disk or LUN
Deploy: When we create a new VM from an Image, it is called Deploying a new VM

---------------------------------------------------------------

NOVALINK

Using PowerVC, we have 2 options to manage Power servers: through an HMC or through a NovaLink LPAR. If we choose NovaLink, a special partition is needed on each Power server, which performs the same functions as an HMC. (A combined solution is also possible, where both HMC and NovaLink exist together.)


The NovaLink architecture enables OpenStack to work with PowerVM (and PowerVC) by providing a direct connection to the Power server (rather than communicating through an HMC).  In an existing HMC-managed environment, PowerVC can manage up to 30 hosts and up to 3000 VMs. In a NovaLink based environment, PowerVC can manage up to 200 hosts and 5000 VMs. It is possible to use PowerVC to manage PowerVM NovaLink systems while still managing HMC managed systems as well.

NovaLink is enabled via a software package that runs in a Linux VM on a POWER8 host. NovaLink provides a consistent interface (with other supported Hypervisors such as KVM), so  OpenStack services can communicate with the LPARs consistently through the NovaLink partition. 

---------------------------------------------------------------

PowerVC and Openstack

PowerVC is built on Openstack, so the main OpenStack functions are built into PowerVC as well. These functions are:
- Image management (in OpenStack it is called "Glance")
- Compute (VM) management (in Openstack it is called "Nova")
- Network management (in OpenStack it is called "Neutron")
- Storage management (in OpenStack it is called "Cinder")



---------------------------------------------------------------

Deploying Virtual Machines (Host Group - Placement Policy)

In order to use PowerVC and to create new VMs, we need Images, Hosts, Networks and Storage space. A new LPAR is created from an Image, and during creation we need to choose which Power Server (Host) and which Network to use.

Power servers are called "Hosts" in PowerVC. After adding several Hosts to PowerVC, we can group these Hosts by creating "Host Groups". Each Host Group has a Placement Policy, which controls where (on which host) our new VMs are created.



For example, if we choose the policy "Memory Utilization Balanced", our new VM will be deployed on the host where memory utilization is the lowest. Every host must be in a host group, and during migration VMs are kept within the host group. Out of the box, PowerVC comes with a "default" host group (a special group that can't be deleted), which houses any host that is registered with PowerVC but not added to a specific host group.

Placement policies:
- Striping: It distributes VMs evenly across all hosts. (CPU/RAM/Storage/Network)
- Packing: It places VMs on the single host that contains the most VMs (until its resources are fully used).
- CPU utilization balance: It places VMs on the host with the lowest CPU utilization in the host group.
- CPU allocation balance: It places VMs on the host with the lowest percentage of its CPU that is allocated to VMs.
- Memory utilization balanced: It places virtual machines on the host that has the lowest memory utilization in the host group
- Memory allocation balance: It places VMs on the host with the lowest percentage of its memory that is allocated post-deployment or after relocation

When a new host is added to a host group and the placement policy is set to striping mode, new VMs are deployed on the new host until the resource usage of this host is about the same as on the previously installed hosts (until it catches up with the existing hosts). 

The placement policies are predefined; it is not possible to create new policies. If during VM deployment we choose a specific host (and not a host group), the placement policy is ignored for that VM.

(Some tips from the Redbook: Use the striping policy rather than the packing policy. Limit the number of concurrent deployments to match the number of hosts.)

---------------------------------------------------------------

Collocation rules 

While Placement Policies are related to Hosts (which Host should be used for VM creation), Collocation Rules define relationships between VMs (they tell which VMs should or should not run together with other VMs on the same Host). A collocation rule also has a policy, which can be either "affinity" or "anti-affinity". An affinity rule means that the VMs in the collocation rule must run on the same host ("best friends"), and an anti-affinity rule means that the VMs need to run on different hosts ("worst enemies"). PowerVC follows these rules when performing live migration, remote restart or host evacuation operations (any mobility operation). Automation becomes much simpler, as we don't need to keep these rules in mind.

You can add a VM to a collocation rule only after deployment (doing this at deployment time is not possible). Collocation rules can be created in the "Configuration" menu under "Collocation Rules".



It is possible that a user starts a mobility operation outside of PowerVC (e.g., directly on the HMC), so the VM could be moved to a host that causes a violation of the collocation rule. In such a case, the policy state will be displayed as “violated” in PowerVC and serve as a visual indicator to the user that some remedial action is needed.

It is not possible to migrate or remote restart a VM that is a member of an “affinity” collocation rule. This restriction exists because there would be a period of time in which the VM is not on the same host, and it would violate the collocation rule.  If a mobility operation is needed on a VM in an “affinity” collocation rule, we need to remove it from the rule, perform the mobility operation and then re-add it to the rule.

---------------------------------------------------------------

Templates

Rather than defining all characteristics for each VM (CPU/RAM…) or each storage unit that must be created, we can use a template that was previously defined.

Three types of templates are available:
- Compute templates: These templates are used to define processing units and memory that are needed by a partition. 
- Deploy templates: These templates are used to allow users to quickly deploy an image. (more details below)
- Storage templates: These templates are used to define storage settings, such as a specific volume type, storage pool, and storage provider. 





Deploy templates:
A deploy template includes everything necessary to create quickly a VM. It includes:
- the deployment target (a Host group or a specific Host), Storage Connectivity Group and any other policies
- compute template (needed CPU, RAM configuration)
- which image to use during deployment
- network (VLAN) needed for the new VM
- any other scripts which will be called during first boot (this section is handled by cloud init)



A deploy template is basically just a bundle of information which is needed for the creation of a new VM. Compared to an image, deploy templates do not use storage space. (Images use storage space; for example an AIX image can be on a 100GB LUN, so creating new images takes up more and more space on the storage. Creating new deploy templates does not use more storage space.)

Creating Deploy Templates:
1. From the Images window, select the image that you want to use to create a deploy template and click Create Template from Image.
2. Fill out the information in the window that opens, then click Create Deploy Template.
3. The deploy template is now listed on the Deploy Templates tab of the Images window.


---------------------------------------------------------------

STORAGE:

Storage provider: Any system that provides storage volumes (SVC, EMC... or SSP). PowerVC may refer to these as storage controllers.
Fabric: The name for a collection of SAN switches
Storage pool: A storage resource (managed by storage providers) in which volumes are created. PowerVC discovers them (can't create one). 
Shared storage pool: PowerVM feature, which is created on VIOS before PowerVC can create volumes on SSP. (PowerVC cannot modify it.) 
Volume: This is a disk or a LUN. It is created from the storage pools and presented as virtual disks to the partitions.

VMs can access their storage by using  vSCSI, NPIV or an SSP (which will create vSCSI luns).


Storage templates:
Storage templates are used to speed up the creation of a disk. A storage template defines several properties of the disk (thin, io group, mirroring...). Disk size is not part of the template. When you register a storage provider, a default storage template is created for that provider. After a disk is created and uses a template, you cannot modify the template settings.




Storage connectivity groups
In short, it refers to a set of VIOSs with access to the same storage controllers. When a VM is created, PowerVC needs to identify which host has connectivity to the requested storage. Also, when a VM is migrated, PowerVC must ensure that the target host also provides connectivity to the volumes of the VM. The purpose of a storage connectivity group is to define settings that control how volumes are attached to VMs, including the connectivity type for boot and data volumes, physical FC port restrictions, fabrics, and redundancy requirements for VIOSs, ports, and fabrics. A storage connectivity group contains a set of VIOSs that are allowed to participate in volume connectivity.

Custom storage connectivity groups provide flexibility when different policies are needed for different types of VMs. For example, a storage connectivity group is needed to use VIOS_1 and VIOS_2 for production VMs and another storage connectivity group is needed for VIOS_3 for development VMs. Many other connectivity policies are available with storage connectivity groups.

When a VM is deployed with PowerVC, a storage connectivity group must be specified. The VM is associated with that storage connectivity group during the VM's existence. A VM can be deployed only on Power Systems hosts that satisfy the storage connectivity group settings. The VM can be migrated only within its associated storage connectivity group and host group.

The default storage connectivity groups for NPIV connectivity, vSCSI connectivity and SSP are created when PowerVC recognizes that the corresponding resources are present. After you add the storage providers and define the storage templates, you can create storage volumes.

Only data volumes must be created manually. Boot volumes are handled by PowerVC automatically. When you deploy a partition, IBM PowerVC automatically creates the boot volumes and data volumes that are included in the images.


Shared storage pool
SSPs are supported on hosts that are managed either by HMC or NovaLink. The SSP is configured manually, without PowerVC (creation of a cluster on VIO servers, adding disks to the pool). After that PowerVC will discover the SSP when it discovers the VIOSs. When a VM is created PowerVC will create logical units (LUs) in the SSP, then PowerVC instructs the VIOS to map these LUs to the VM (VIO client partition) as a vSCSI device. 




---------------------------------------------------------------

NETWORK

When we set up PowerVC for use, it is a good habit to create all networks that will be needed for future VM creation. (These VLANs need to be added on the switch ports that are used by the SEA).

PowerVC requires that the SEAs are created before it starts to manage the systems. If you are using SEA in sharing/auto mode with VLAN tagging, create the SEA without any VLANs that are assigned on the Virtual Ethernet Adapters. PowerVC adds or removes these VLANs on the SEAs when necessary (at VM deletion and creation).

 For example:
- If you deploy a VM on a new network, PowerVC adds the VLAN on the SEA.
- If you delete the last VM of a specific network (on a host), the VLAN is automatically deleted.
- If the VLAN is the last VLAN that was defined on the Virtual Ethernet Adapter, this VEA is removed from the SEA.

When a network is created in PowerVC, a SEA is automatically chosen from each registered host. If the VLAN does not exist yet on the SEA, PowerVC deploys that VLAN to the SEA. To manage PowerVM, PowerVC requires that at least one SEA is defined on the host. PowerVC supports the use of virtual switches in the system. These are good to separate a single VLAN across multiple distinct physical networks. (To split a single VLAN across multiple SEAs, break those SEAs into separate virtual switches.)

In environments with dual VIOSs, the secondary SEA is not shown except as an attribute of the primary SEA. If VLANs are added manually to SEA after the host is managed by PowerVC, the new VLAN is not automatically discovered by PowerVC. To discover a newly added VLAN, run the "Verify Environment" function.

PowerVC supports Dynamic Host Configuration Protocol (DHCP) or static IP address assignment. When DHCP is used, PowerVC is not aware of the IP addresses of the VMs that it manages. PowerVC also supports IP addresses by using hardcoded (/etc/hosts) or Domain Name Server (DNS)-based host name resolution.

Since Version 1.2.2, PowerVC can dynamically add a network interface controller (NIC) to a VM or remove a NIC from a VM. PowerVC does not set the IP address for new network interfaces that are created after the machine deployment. Any removal of a NIC results in freeing the IP address that was set on it.

---------------------------------------------------------------

PROJECTS

A project (sometimes called a tenant) is a unit of ownership. The "ibm-default" project is created during installation, but PowerVC supports additional projects for resource segregation. By creating several projects we can separate virtual machines, volumes, and images from each other. (For example, a specific virtual machine can be seen only in one project, and it is not possible to see it in another project.) Other components of PowerVC, such as storage connectivity groups and compute templates, do not belong to a specific project (these are generally available in every project). Only users with a role assignment can work with the resources of a specific project.

After creating a project, you are automatically assigned the admin role for that project. This allows you to assign additional roles to users in that project.

Role assignments are specific to a project. For example, a user could have the vm_manager and storage_manager roles in one project and only the viewer role in another project. Users can only log in to one project at a time; if they have roles on multiple projects, they can switch between them. When logged in to a project, users only see the resources, messages, etc. that belong to that project, not those of other projects.

OpenStack does not support moving resources from one project to another. You can move volumes and virtual machines by unmanaging them and then remanaging them in the new project. All resources within a project must be deleted or unmanaged before the project can be deleted. The ibm-default project cannot be deleted.

openstack project create   create and manage projects
openstack role add ...     assign roles to users in a project
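
A small sketch of how these could be used (the project name "test_project", the user "jsmith" and the chosen role are only example values; on the PowerVC server the OpenStack environment variables have to be set first, e.g. by sourcing /opt/ibm/powervc/powervcrc):

openstack project create --description "Test project" test_project      <--create a new project
openstack role add --project test_project --user jsmith vm_manager      <--give user jsmith the vm_manager role in that project
openstack role assignment list --project test_project                   <--verify the role assignments in the project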

------------------------------------------

Environment checker:

This is a single interface to confirm that resources (Compute, Storage, Network etc.) registered in PowerVC meet the configuration and hardware level requirements.

The environment checker tool verifies these (and more):
- Management server has the required resources in terms of memory, disk space etc.
- Hosts and storage are the correct machine type and model.
- The allowed number of hosts is not exceeded.
- The correct level of Virtual I/O Server is installed on your hosts.
- The Virtual I/O Server is configured correctly on all of your hosts.
- Storage and SAN switches are configured correctly.

---------------------------------------------------------------

Commands:

powervc-diag             collects diagnostic data from PowerVC
powervc-log              enables or disables the debug log level
powervc-log-management   lets you view and modify the settings for log management
powervc-register         registers an OpenStack supported storage provider or fabric
powervc-services         stops/starts PowerVC services and checks their status
  stop                   stops PowerVC (all services)
  start                  starts PowerVC (all services)
  status                 show status of all services

powervc-config           has many subcommands to configure PowerVC
  purge                  removes all events that are stored in the Panko database
  general ifconfig       change the host name or IP address of the PowerVC server
  storage                storage related configurations
  compute                configure many different options (like mover service partition IP, VLAN related configs on VMs etc.)
  web inactivity-timeout configure the idle timeout in the UI before the user is prompted and logged out (0 or less disables the timer)
  reauth-warn-time       how long before token expiry the user is asked for the password; entering the password obtains a new token (0 or less disables the timer)

powervc-image            image related commands
  config                 displays or changes the command configuration properties
  import                 imports an uncompressed deployable image from an OVA into PowerVC
  export                 exports a deployable image from PowerVC to a local OVA
  list                   lists the deployable images managed by PowerVC

Switching to LDAP and switching back:
powervc-config identity repository                            <--shows whether OS or LDAP authentication is currently in use
powervc-config identity repository --user root --type os      <--switch back to OS authentication (old users are kept)
powervc-config identity repository -t ldap …                  <--switching to LDAP authentication

Enabling debug:
powervc-config general debug                                  <--checking each service if debug is enabled or not for that service
powervc-config identity debug --enable --restart              <--enabling debug for identity service
powervc-config identity debug --disable --restart             <--disabling debug for identity service

powervc-backup --targetdir /powervc/backup                    <--creating a backup
powervc-restore --targetdir /powervc/backup/<backup dir>/     <--restoring a powervc backup

/opt/ibm/powervc/version.properties                           <--contains version info and other properties of PowerVC
https://ip_address/powervc/version                            <--gets the current version of PowerVC
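
For example, the version URL can be checked quickly with curl (a simple sketch; -k only skips certificate verification, and if the endpoint asks for authentication a token is needed as shown in the API section below):

curl -k https://ip_address/powervc/version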


---------------------------------------------------------------

PowerVC backup

1. mount a remote NFS share where the backup will be saved
[root@powervc ~]# mount nim01:/repository/BACKUP /mnt
mount.nfs: Remote I/O error

The Remote I/O error can happen because PowerVC runs on Linux (Red Hat), which tries NFS version 4 by default, and that is not configured on the AIX side; choose NFS version 3 during mount:
[root@powervc ~]# mount -o vers=3 nim01:/repository/BACKUP /mnt

2. start the backup, which takes about 5 minutes; during that time the web interface is not available
[root@powervc ~]# powervc-backup --targetdir /mnt
Continuing with this operation will stop all PowerVC services.  Do you want to continue? (y/N):y
Stopping PowerVC services...
Backing up the databases and data files...
Database and file backup completed. Backup data is in archive /mnt/20180622105847651966/powervc_backup.tar.gz
Starting PowerVC httpd services...
Starting PowerVC bumblebee services...
Starting PowerVC services...
PowerVC backup completed successfully.

POWERVC - API CURL


POWERVC API

API (Application Programming Interface) is an interface, in other words a sort of "software" (a combination of protocols, subroutines...), which receives requests from and sends responses to remote servers and applications. For example, a weather application on a mobile phone sends a request regarding the temperature, and the API on the remote server receives this request and sends back a response with the current temperature. APIs are very helpful for developers, because programs can call them automatically over the internet as HTTP requests, using methods such as GET, PUT, POST, or DELETE.

PowerVC is built on OpenStack, which is an open-source cloud computing platform. OpenStack provides an API that can be used for writing software that manages the cloud (create servers, stop/start servers, create images, etc.). To accomplish these tasks, the software needs to communicate with the OpenStack API. PowerVC itself also uses these OpenStack APIs in the background. If we want to build a new solution on top of PowerVC, we have these options:
- Supported OpenStack APIs - APIs provided by OpenStack and can be used with PowerVC without any modifications.
- Extended OpenStack APIs - APIs provided by OpenStack, but their functions are extended by PowerVC.
- PowerVC APIs - These APIs do not exist in OpenStack and are exclusive to PowerVC.


APIs are available in two formats:
(Preferred by IBM)       https://<ip-hostname>:<service-port>/...
(this one also works)    https://<ip-hostname>/powervc/openstack/<service>/...

https://<POWERVC>:8774/v2/<TENANTID>/servers
https://<POWERVC>/powervc/openstack/compute/v2/<TENANTID>/servers



All PowerVC ports:
https://www.ibm.com/support/knowledgecenter/en/SSXK2N_1.3.2/com.ibm.powervc.standard.help.doc/powervc_planning_security_firewall_hmc.html

Regarding API version numbers, the /opt/ibm/powervc/powervcrc file can give some idea:
export OS_IDENTITY_API_VERSION=3
export OS_COMPUTE_API_VERSION=2.46
export OS_NETWORK_API_VERSION=2.0
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=2
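
On the PowerVC server itself this file can be sourced, after which the standard OpenStack CLI talks to PowerVC. A minimal sketch (the user name and password are example values; depending on the powervcrc content they may be prompted for instead):

source /opt/ibm/powervc/powervcrc
export OS_USERNAME=root
export OS_PASSWORD=abcd1234
openstack server list                   <--list the VMs managed by PowerVC
openstack image list                    <--list the images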

--------------------------------------------------------

API CALLS WITH CURL
(for scripting)

One possibility for using API calls is the program "curl" from a remote AIX/Linux machine. curl can send the needed HTTP request to the PowerVC server, and if we follow the right syntax, we can achieve our task in one line (compared to Python, where we need to write small scripts).

To get any info from PowerVC through API calls (like CPU/RAM settings or stop/start LPAR), we need to go through several steps. First we need to authenticate ourselves on the PowerVC server with a user and password to get a token, and we can use this token for later calls to achieve what we want. (By default a token is valid for 6 hours.)

As an API call can get long and complex, the examples below use variables (in CAPITAL letters) to make the calls easier to read.

for example:
POWERVC='<FQDN name of powervc server>'
(it may be necessary to export it: export POWERVC='<server name>')

In the examples below, I ran the API calls on an AIX system where Python with json.tool was installed.
(On Linux the grep/awk/sed parts may not work the same way, so remove or rewrite those parts of the commands.)

SHOW DETAILS OF AN LPAR (VM) THROUGH API:

the steps in short:
1. put user and password in json format (either a variable or a file can be used)
2. get a token from PowerVC
3. get tenant id
4. get the VM id (which is the id of the VM in the openstack)
5. show all details of the specified VM


1.  AUTH_JSON

First we need to authenticate with a user and password, and this needs to be done in JSON format. We have two possibilities:
- create a variable (AUTH_JSON) with the needed details (I prefer this one)
- or create a file (auth.json) with the needed details

Variable:
AUTH_JSON='{"auth":{"scope":{"project":{"domain":{"name":"Default"},"name":"ibm-default"}},"identity":{"password":{"user":{"domain":{"name":"Default"},"password":"abcd1234","name":"root"}},"methods":["password"]}}}'

(In the examples above and below I used the "root" user with the password "abcd1234", and the "ibm-default" project. The "ibm-default" project exists by default in PowerVC.)

------------------------------------------------------------------
or, if we want, we can use an auth.json file:
# cat auth.json
{
        "auth": {
                "scope": {
                        "project": {
                                "domain": {
                                        "name": "Default"
                                },
                                "name": "ibm-default"
                        }
                },
                "identity": {
                        "password": {
                                "user": {
                                        "domain": {
                                                "name": "Default"
                                        },
                                        "password": "abcd1234",
                                        "name": "root"
                                }
                        },
                        "methods": [
                                "password"
                        ]
                }
        }
}
------------------------------------------------------------------

2.  GET TOKEN_ID:

We need to request a token, and we will use this token id for later API calls.
($POWERVC and $AUTH_JSON have already been set earlier.)

TOKEN_ID=`curl -1 -k -s -i -X POST https://$POWERVC:5000/v3/auth/tokens -H "Accept: application/json" -H "Content-Type: application/json" -d $AUTH_JSON | grep X-Subject-Token | cut -d ' ' -f2 | cat -v | sed 's/\^M//'`

the curl options used above:
-1 : use TLSv1.0 or greater
-k : "insecure" mode; allow insecure server connections when using SSL
-s : silent mode; does not show the progress bar
-i : include the HTTP headers of the response in the output; useful for debugging, and we need it here because the token id is in a header
-X <GET/PUT/POST/...> : the HTTP request method
-H : HTTP headers added to the request
      Accept: specifies the expected response content format
      Content-Type: specifies the request content format

The token can be found in the header line X-Subject-Token (that is why we need -i, then grep and cut). Unfortunately the header line ends with a carriage return ("^M", which echo does not show), so we make it visible with "cat -v" and then remove it with sed.

------------------------------------------------------------------
….. if auth.json file was used:
curl -1 -k -i -X POST https://$POWERVC:5000/v3/auth/tokens -H "Accept: application/json" -H "Content-Type: application/json" -d @auth.json | grep X-Subject-Token | cut -d ' ' -f2
------------------------------------------------------------------
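
As an alternative to the "cat -v" + sed trick in step 2, the trailing carriage return can also be stripped with tr (same result, just a different tool):

TOKEN_ID=`curl -1 -k -s -i -X POST https://$POWERVC:5000/v3/auth/tokens -H "Accept: application/json" -H "Content-Type: application/json" -d $AUTH_JSON | grep X-Subject-Token | cut -d ' ' -f2 | tr -d '\r'`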


3.  GET TENANT ID:
(get the id of the IBM Default project)

Tenant and project are the same thing (OpenStack uses the term "tenant" and PowerVC uses the term "project").

TENANT_ID=`curl -1 -k -s -X GET https://$POWERVC:5000/v3/projects -H "X-Auth-Token:$TOKEN_ID" | python -m json.tool | grep -p{ "IBM Default Tenant"| grep -w id | awk -F'"' '{print $4}'`

(originally I used this: curl -1 -k -i -X GET https://$POWERVC:5000/v3/projects -H "X-Auth-Token:$TOKEN_ID")

some comments regarding curl:
-i was not used here, as we do not need the headers (otherwise json.tool could not parse the output)
python -m json.tool : a tool that comes with Python and pretty-prints the output (across multiple lines). (Without it the tenant id is still there, but in one long line that is hard to grep.)


4. GET VM_ID

First I put the LPAR name in a variable (VM_NAME), which we can use in grep to search for the ID of the needed VM:
VM_NAME=`lsattr -El inet0 -a hostname | awk '{print $2}'`

VM_ID=`curl -1 -k -s -X GET https://$POWERVC:8774/v2/$TENANT_ID/servers -H "Content-Type: application/json" -H "X-Auth-Token:$TOKEN_ID" | python -m json.tool | egrep 'name|id' | sed -n "N;/$VM_NAME/p;"|head -1 |awk -F '"' '{print $4}'`


5. GET VM DETAILS:
Showing all the details of a specific VM ($VM_ID is used), like CPU/RAM settings:

curl -1 -k -s -X GET https://$POWERVC:8774/v2/$TENANT_ID/servers/$VM_ID -H "Content-Type: application/json" -H "X-Auth-Token:$TOKEN_ID" | python -m json.tool


--------------------------------------------------------


CREATE A VM THROUGH API

1. assign variables for the PowerVC server and the name of the new VM
2. auth_json
3. get a token id (which authorizes us to do build/stop/start/delete actions)
4. get the id of the tenant
5. get the network id (VLAN) of the new server
6. get the image id which will be used to create the VM
7. get the flavor id which is needed to create the VM (it is the compute template which contains the CPU and RAM settings for the new VM)
8. add all these variables to an API_BUILD variable (to have a simpler syntax)
9. create the VM with a POST request


1. VARIABLES:
POWERVC='<fqdn of powervc>'
VM_NEW='<new vm name>'


2. AUTH_JSON
AUTH_JSON='{"auth":{"scope":{"project":{"domain":{"name":"Default"},"name":"deploy"}},"identity":{"password":{"user":{"domain":{"name":"Default"},"password":"abcd1234","name":"api_user"}},"methods":["password"]}}}'


 3. TOKEN ID
TOKEN_ID=`curl -1 -k -s -i -X POST https://$POWERVC:5000/v3/auth/tokens -H "Accept: application/json" -H "Content-Type: application/json" -d $AUTH_JSON | grep X-Subject-Token | cut -d ' ' -f2 | cat -v | sed 's/\^M//'`


4. TENANT ID
TENANT_ID=`curl -1 -k -s -X GET https://$POWERVC:5000/v3/projects -H "X-Auth-Token:$TOKEN_ID" | python -m json.tool | grep -p{ "IBM Default Tenant"| grep -w id | awk -F'"' '{print $4}'`


5. NET_ID (we grep for "prod_vlan", a VLAN we created earlier in PowerVC for production VMs)
NET_ID=`curl -1 -k -s -X GET https://$POWERVC:9696/v2.0/networks -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN_ID" | python -m json.tool | grep -p{ "prod_vlan" | grep -w id | awk -F'"' '{print $4}'`


6. IMAGE_ID (we grep for AIX-71, an image that was created earlier in PowerVC)
IMAGE_ID=`curl -1 -k -s -X GET https://$POWERVC:9292/v2/images -H "Content-Type: application/json" -H "X-Auth-Token:$TOKEN_ID" |  python -m json.tool | grep -p{  "AIX-71" | grep -w id | awk -F'"' '{print $4}'`


7. FLAVOR_ID (this is the name of the Compute Template we created in PowerVC, here we grep for Prod_VM)
FLAVOR_ID=`curl -1 -k -s -X GET https://$POWERVC:8774/v2/flavors -H "Content-Type: application/json" -H "X-Auth-Token:$TOKEN_ID" | python -m json.tool | egrep 'name|id' | sed -n "N;/Prod_VM/p;"|head -1 |awk -F '"' '{print $4}'`


8. API_BUILD (assembling all the above variables with the correct API syntax into this variable, to keep the final command simpler)
API_BUILD="{\"server\":{\"flavorRef\":\"$FLAVOR_ID\",\"name\":\"$VM_NEW\",\"imageRef\":\"$IMAGE_ID\",\"networks\":[{\"uuid\":\"$NET_ID\"}]}}"


9. Creating a VM: (this is the step where the VM will be created in PowerVC, with all the above parameters)
curl -1 -k -i -X POST https://$POWERVC:8774/v2/$TENANT_ID/servers -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN_ID" -d $API_BUILD
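
The POST request only starts the deployment; the build runs asynchronously. The progress can be followed by looking up the new VM and checking its "status" field (it changes from BUILD to ACTIVE when the deploy finishes), reusing the commands from the previous section:

VM_ID=`curl -1 -k -s -X GET https://$POWERVC:8774/v2/$TENANT_ID/servers -H "Content-Type: application/json" -H "X-Auth-Token:$TOKEN_ID" | python -m json.tool | egrep 'name|id' | sed -n "N;/$VM_NEW/p;"|head -1 |awk -F '"' '{print $4}'`
curl -1 -k -s -X GET https://$POWERVC:8774/v2/$TENANT_ID/servers/$VM_ID -H "Content-Type: application/json" -H "X-Auth-Token:$TOKEN_ID" | python -m json.tool | grep '"status"'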

--------------------------------------------------------

Activation Input through API

If we want to use an Activation Input script in our API call, the script must first be encoded in Base64 format and then added to a variable, which can then be used in the API call:

This Activation Input can be edited in the Image Deployment window.

1. Convert the script to Base64 format (or encode it locally, see the sketch after these steps), for example at:
https://www.base64encode.org/

2. Add it to a variable (it is called BASE_64)
BASE_64='IyEvdXNyL2Jpbi9zaAoKIyBTdXBwb3J0ZWQgTmV0d29yazogdW5peCwgbGFiCmV4cG9ydCBuZXR3b3JrPSd1bml4JwojIFN1cHBvcnRlZCBQdXBwZXQ6ICIiLCBkdCwgcngsIHBjaQpleHBvcnQgcHVwcGV0PSJwY2kiCgpzdGFydHNyYyAtcyBkaGNwY2QKc2xlZXAgNjA='

3. create a new API_BUILD variable (as in step 8 above)
API_BUILD="{\"server\":{\"flavorRef\":\"$FLAVOR_ID\",\"name\":\"$AIX_NAME\",\"imageRef\":\"$IMAGE_ID\",\"networks\":[{\"uuid\":\"$NET_ID\"}],\"user_data\":\"$BASE_64\"}}"

4. create the VM with a POST request (the command is the same as in step 9 above)
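
The Base64 encoding can also be done locally instead of the web page. A small sketch (activation.sh is a placeholder name for the Activation Input script file, and the exact options can differ slightly between platforms):

BASE_64=`base64 -w0 activation.sh`               <--on Linux (GNU coreutils; -w0 disables line wrapping)
BASE_64=`openssl base64 -A -in activation.sh`    <--with openssl (e.g. on AIX); -A produces single-line output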


--------------------------------------------------------

OTHER COMMANDS

LISTING (GET) COMMANDS:

GET VMs:
curl -k -H "X-Auth-Token:$TOKEN_ID" -X GET https://$POWERVC:8774/v2/$TENANT_ID/servers | python -mjson.tool

GET IMAGES:
curl -1 -k -s -X GET https://$POWERVC:9292/v2/images -H "Content-Type: application/json" -H "X-Auth-Token:$TOKEN_ID" |  python -m json.tool

GET NETWORKS:
curl -1 -k -s -X GET https://$POWERVC:9696/v2.0/networks -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN_ID" | python -m json.tool

GET FLAVORS (Compute Template):
curl -1 -k -s -X GET https://$POWERVC:8774/v2/flavors -H "Content-Type: application/json" -H "X-Auth-Token:$TOKEN_ID"

GET HMCS:
curl -1 -k -s -X GET https://$POWERVC:8774/v2/$TENANT_ID/ibm-hmcs -H "X-Auth-Token:$TOKEN_ID" | python -m json.tool
curl -1 -k -s -X GET https://$POWERVC:8774/v2/$TENANT_ID/ibm-hmcs/detail -H "X-Auth-Token:$TOKEN_ID" | python -m json.tool

GET MANAGED SYSTEMS (AND POWERVC):
curl -1 -k -s -X GET https://$POWERVC:8774/v2.1/$TENANT_ID/os-hosts -H "X-Auth-Token:$TOKEN_ID" | python -m json.tool


START/STOP VM:

curl -k -H "Content-Type: application/json" -H "X-Auth-Token:$TOKEN_ID" -X POST -d '{"os-start": null}' https://pvc/powervc/openstack/compute/v2/$TENANT_ID/servers/$VM_ID/action
curl -k -H "Content-Type: application/json" -H "X-Auth-Token:$TOKEN_ID" -X POST -d '{"os-stop": null}' https://pvc/powervc/openstack/compute/v2/$TENANT_ID/servers/$VM_ID/action


UPDATE (PUT) VM DETAILS:

CHANGING VM NAME (to new_aix_1234) in POWERVC:
API_NEW='{"server":{"name":"new_aix_1234"}}'

CHANGING VM NAME + ADDING IP ADDRESS TO A FIELD in POWERVC:
API_NEW='{"server":{"accessIPv4":"111.112.113.114","name":"new_aix_1234"}}'

curl -1 -k -s -X PUT https://$POWERVC:8774/v2/$TENANT_ID/servers/$VM_ID -H "Content-Type: application/json" -H "X-Auth-Token:$TOKEN_ID" -d $API_NEW | python -m json.tool


DELETE COMMANDS:

DELETE VM:
curl -1 -k -i -X DELETE https://$POWERVC:8774/v2/$TENANT_ID/servers/$VM_ID -H "Content-Type: application/json" -H "X-Auth-Token:$TOKEN_ID"



--------------------------------------------------------

Get a token from Openstack

There is an example on the OpenStack site of how to get a token (I have not tested it; it looks like the tenant id must already be known):
https://docs.openstack.org/zaqar/pike/user/authentication_tokens.html


curl -X POST https://localhost:5000/v2.0/tokens -d '{"auth":{"passwordCredentials":{"username": "joecool", "password":"coolword"}, "tenantId":"5"}}' -H 'Content-type: application/json'

--------------------------------------------------------