POWERVC - NOVALINK


NovaLink

NovaLink is essentially a "replacement" for the HMC. In a usual (HMC-managed) installation all OpenStack services (Neutron, Cinder, Nova, etc.) ran on the PowerVC host; the Nova service, for example, required one process for each Managed System:

# ps -ef | grep [n]ova-compute
nova       627     1 14 Jan16 ?        06:24:30 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_10D5555.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_10D5555.log
nova       649     1 14 Jan16 ?        06:30:25 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_65E5555.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_65E5555.log
nova       664     1 17 Jan16 ?        07:49:27 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_1085555.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_1085555.log
nova       675     1 19 Jan16 ?        08:40:27 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_06D5555.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_06D5555.log
nova       687     1 18 Jan16 ?        08:15:57 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova-9117MMD_6575555.conf --log-file /var/log/nova/nova-compute.log --log-file /var/log/nova/nova-compute-9117MMD_6575555.log

Besides the extra load, all PowerVC actions had to go through the HMC. PowerVC and the HMC were single points of contact for every action, which could cause slowness in large environments. In 2016 IBM came up with a solution: a special LPAR on each Managed System that can do everything an HMC would normally do. This special LPAR is called NovaLink. Once this LPAR is created on all Managed Systems, PowerVC stops querying the HMC and queries the NovaLink LPARs directly; some OpenStack services (Nova, Neutron, Ceilometer) also run there. It is a Linux LPAR (currently Ubuntu or RHEL) with both a CLI and an API.
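
For example, once the NovaLink partition is up, the managed system can be queried locally through this CLI (a simple check; pvmctl is covered in detail later):

$ pvmctl sys list                                <--list the Managed System from the NovaLink LPAR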

---------------------------------------------------------------------

NovaLink Install/Update

+--------------------------------+ Welcome +---------------------------------+
|                                                                            |
| Welcome to the PowerVM NovaLink Install wizard.                            |
|                                                                            |
| (*) Choose to perform an installation.                                     |
| This will perform an installation of the NovaLink partition, its core      |
| components and REST APIs, and all needed Virtual I/O servers.              |
|                                                                            |
| ( ) Choose to repair a system.                                             |
| This will repair the system by performing a rescue/repair of existing      |
| Virtual I/O servers and NovaLink partitions.                               |
| Choose this option if PowerVM is already installed but is corrupted        |
| or there is a failure.                                                     |
|                                                                            |
|                                                                            |
| <Next> <Cancel>                                                            |
|                                                                            |
+----------------------------------------------------------------------------+
<Tab>/<Alt-Tab> between elements | <Space> selects | <F12> next screen


NovaLink is a standard LPAR whose I/O is provided by the VIOS (therefore no physical I/O is required), with a special permission bit that enables PowerVM management authority. If you install the NovaLink environment on a new managed system, the NovaLink installer creates the NovaLink partition automatically. It creates the Linux and VIOS LPARs and installs the operating systems and the NovaLink software. It creates logical volumes from the VIOS rootvg for the NovaLink partition. (The VIOS installation files (extracted mksysb files from the VIOS DVD ISO) need to be added to the NovaLink installer manually: https://www.ibm.com/support/knowledgecenter/POWER8/p8eig/p8eig_creating_iso.htm)


If you install the NovaLink software on a system that is already managed by an HMC, use the HMC to create a Linux LPAR and set its powervm_mgmt_capable flag to true (the NovaLink partition must be granted the PowerVM management capability):
$ lssyscfg -m p850 -r lpar --filter "lpar_ids=1"
name=novalink,lpar_id=1,lpar_env=aixlinux,state=Running,resource_config=1,os_version=Unknown,logical_serial_num=211FD2A1,default_profile=default,curr_profile=default,work_group_id=none,shared_proc_pool_util_auth=0,allow_perf_collection=0,power_ctrl_lpar_ids=none,boot_mode=norm,lpar_keylock=norm,auto_start=1,redundant_err_path_reporting=0,rmc_state=active,rmc_ipaddr=129.40.226.21,time_ref=0,lpar_avail_priority=127,desired_lpar_proc_compat_mode=default,curr_lpar_proc_compat_mode=POWER8,suspend_capable=0,remote_restart_capable=0,simplified_remote_restart_capable=0,sync_curr_profile=0,affinity_group_id=none,vtpm_enabled=0,powervm_mgmt_capable=0
$ chsyscfg -m p850 -r lpar -i lpar_id=1,powervm_mgmt_capable=1

The powervm_mgmt_capable flag is valid for Linux partitions only:
0 - do not allow this partition to provide PowerVM management functions
1 - enable this partition to provide PowerVM management functions
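
To confirm that the flag change took effect, the same lssyscfg query can be narrowed down with the HMC's standard -F field selection:

$ lssyscfg -m p850 -r lpar --filter "lpar_ids=1" -F name,powervm_mgmt_capable
novalink,1                                       <--the flag is now set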


PowerVM NovaLink by default installs Ubuntu, but also supports RHEL. The installer provides an option to install RHEL after the required setup or configuration of the system completes. For easier installation of PowerVM NovaLink on multiple servers, set up a netboot (bootp) server to install PowerVM NovaLink from a network.

Installation log files are in /var/log/pvm-install, and the NovaLink installer creates an installation configuration file, /var/log/pvm-install/novalink-install.cfg (which can be used if the NovaLink partition needs to be restored). Updating PowerVM NovaLink is currently driven entirely through Ubuntu's apt package system.
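
The currently installed NovaLink package versions can be checked with the standard Ubuntu package tools, for example:

$ dpkg -l | grep pvm                             <--list installed pvm-* packages and versions
$ apt-cache policy pvm-novalink                  <--show installed and candidate version of the meta package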

---------------------------------------------------------------------

NovaLink and HMC

NovaLink provides a direct connection to the PowerVM server rather than proxying through an HMC. For example, a VM create request in PowerVC goes directly to NovaLink, which then communicates with PowerVM. This allows improved scalability (from 30 to 200+ servers), better performance, and better alignment with OpenStack.

Hosts can be managed by NovaLink only (without an HMC), or can be co-managed (NovaLink and HMC together). In a co-managed setup either NovaLink or the HMC is the master. Both have read access to the partition configuration, but only the master can make changes to the system. Typically NovaLink is the co-management master; however, if a task has to be done from the HMC (like a firmware upgrade), we can explicitly request master authority for the HMC, perform the action, and then give the authority back to NovaLink.


HMC: saves the LPAR configuration in the FSP NVRAM, uses the FSP lock mechanism, and receives events from the FSP/PHYP
NovaLink: receives events from PHYP only; it is not aware of the FSP and does not receive FSP events

In co-management mode there are no partition profiles. In OpenStack the concept of a flavor is similar to a profile, and these are all managed by OpenStack, not by the HMC or NovaLink. For example, you can activate a partition with its current configuration, but not with a profile.

To update the firmware on a system that is managed by NovaLink only, use the ldfware command on the service partition. If the system is co-managed by NovaLink and an HMC, firmware updates can be performed only from the HMC, which must be set to master mode for the update. After the firmware update is finished, master mode can be set back to NovaLink. (The current operation has to finish before the change completes; a force option is also available.)

In HMC CLI:
$ chcomgmt -m <managed_system> -o setmaster -t norm              <--set HMC to be master on the specified Man. Sys.
$ chcomgmt -m <managed_system> -o relmaster                      <--set Novalink to be master again
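
On HMC levels that support co-management, the current master can also be verified from the HMC (a quick check):

$ lscomgmt -m <managed_system>                   <--display the co-management settings and the current master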

In Novalink CLI:
$ pvmctl sys list -d master                                      <--list master (-d: display)
$ pvmctl sys set-master                                          <--set NovaLink to be master

---------------------------------------------------------------------

NovaLink partition and services

NovaLink is not part of PowerVC, but the two technologies work closely together. If NovaLink is installed on a host, even if an HMC is connected to it, PowerVC must manage that host through the NovaLink partition. The NovaLink LPAR (with the installed software packages) provides OpenStack services and can perform virtualization tasks in the PowerVM/Hypervisor layer. The following OS packages provide these functions in NovaLink:
-ibmvmc-dkms: the device driver kernel module that allows NovaLink to talk to the hypervisor
-pvm-core: the base NovaLink package; it primarily provides a shared library to the REST server
-pvm-rest-server: the Java web server used to run the REST API service
-pvm-rest-app: the REST application that provides all the REST APIs and communicates with pvm-core
-pypowervm: a Python-based API wrapper library for interaction with the PowerVM REST API
-pvm-cli: provides the Python-based CLI (pvmctl)

A meta package called pvm-novalink ensures dependencies between all these packages. When updating, just update pvm-novalink and it will handle the rest.
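
A typical update therefore stays entirely within the apt tooling; a minimal sketch (assuming the package sources are already configured):

$ sudo apt update                                <--refresh the package lists
$ sudo apt install --only-upgrade pvm-novalink   <--upgrade the meta package, which pulls in matching versions of its dependencies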

NovaLink contains two system services that should always be running:
- pvm-core
- pvm-rest

If you are not able to complete tasks on NovaLink, verify whether these services are running. Use the systemctl command to view the status of these services and to stop, start, and restart these services. (Generally restarting pvm-core will cause pvm-rest to also restart.)
# systemctl status pvm-core / pvm-rest
# systemctl stop pvm-core / pvm-rest
# systemctl start pvm-core / pvm-rest
# systemctl restart pvm-core / pvm-rest
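
If one of the services fails to start, the systemd journal is the first place to look, for example:

# journalctl -u pvm-core -n 50                   <--show the last 50 log lines of pvm-core
# journalctl -u pvm-rest --since today           <--show today's pvm-rest log entries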


With these packages installed, NovaLink provides two main groups of services: OpenStack services and NovaLink core services:


OpenStack Services
- nova-powervm: Nova is the compute service of OpenStack. It handles VM management (creating VMs, adding/removing CPU/RAM, ...)
- networking-powervm: the network service of OpenStack (Neutron). Provides functions to manage SEAs, VLANs, ...
- ceilometer-powervm: Ceilometer is the monitoring service of OpenStack. Collects monitoring data for CPU, network, memory, and disk usage

These services use the pypowervm library, a Python-based library that interacts with the PowerVM REST API.


NovaLink Core Services
These services communicate with the PHYP and the VIOS; they provide the direct connection to the managed system.
- REST API: based on the API that is used by the HMC. It also provides a Python-based software development kit.
- CLI: provides shell interaction with PowerVM. It is Python-based as well.

---------------------------------------------------------------------

RMC with PowerVM NovaLink

The RMC connection between NovaLink and each LPAR is routed through a dedicated internal virtual switch (the mandatory name is MGMTSWITCH), and this virtual network uses PVID 4094.

It uses an IPv6 link, and VEPA mode has to be configured, so LPARs can NOT communicate directly with each other; network traffic goes out to the switch first. After it is configured correctly, NovaLink and the client LPARs can communicate for DLPAR and mobility operations. The minimum RSCT version to use RMC with NovaLink is 3.2.1.0. The management vswitch is required for LPARs deployed using PowerVC; the HMC can continue using RMC through the existing mechanisms.

The LPARs use virtual Ethernet adapters to connect to NovaLink through this virtual switch. The virtual switch is configured to communicate only with the trunk port, so an LPAR can use this virtual network only to connect with the NovaLink partition. LPARs can connect with partitions other than the NovaLink partition only if a separate network is configured for that purpose.
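
From a client LPAR, the state of this RMC connection can be verified with the standard RSCT tooling (assuming RSCT 3.2.1.0 or later is installed on the client):

# /usr/sbin/rsct/bin/rmcdomainstatus -s ctrmc    <--the NovaLink partition should be listed as a management node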

---------------------------------------------------------------------

NovaLink CLI (pvmctl, viosvrcmd)

The NovaLink command-line interface (CLI) is provided by the Python-based pvm-cli package. It uses the pvmctl and viosvrcmd commands for most operations. Execution of the pvmctl command is logged in /var/log/pvm/pvmctl.log, and commands can be executed only by users who are in the pvm_admin group. The admin user (i.e. padmin) is added to the group automatically during installation.

pvmctl

It runs operations against an object: pvmctl OBJECT VERB

Supported OBJECT types:
ManagedSystem (sys)
LogicalPartition (lpar or vm)
VirtualIOServer (vios)
SharedStoragePool (ssp)
IOSlot (io)
LoadGroup (lgrp)
LogicalUnit (lu)
LogicalVolume (lv)
NetworkBridge (nbr or bridge)
PhysicalVolume (pv)
SharedEthernetAdapter (sea)
VirtualEthernetAdapter (vea or eth)
VirtualFibreChannelMapping (vfc or vfcmapping)
VirtualMediaRepository (vmr or repo)
VirtualNetwork (vnet or net)
VirtualOpticalMedia (vom or media)
VirtualSCSIMapping (scsi or scsimapping)
VirtualSwitch (vswitch or vsw)

Supported operations (VERB), for example:
LogicalPartition (lpar or vm): create, delete, list, migrate, migrate-recover, migrate-stop, power-off, power-on, restart, update
IOSlot (io): attach, detach, list
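
The full set of verbs for any object type can be printed with the CLI's built-in help, for example:

$ pvmctl lpar --help                             <--show supported operations for LogicalPartition
$ pvmctl lpar create --help                      <--show all flags of a specific operation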

---------------------------------------------------------------------

pvmctl listing objects

$ pvmctl lpar list
Logical Partitions
+----------+----+----------+----------+----------+-------+-----+-----+
| Name     | ID | State    | Env      | Ref Code | Mem   | CPU | Ent |
+----------+----+----------+----------+----------+-------+-----+-----+
| novalink | 2  | running  | AIX/Lin> | Linux p> | 2560  | 2   | 0.5 |
| pvc      | 3  | running  | AIX/Lin> | Linux p> | 11264 | 2   | 1.0 |
| vm1      | 4  | not act> | AIX/Lin> | 00000000 | 1024  | 1   | 0.5 |
+----------+----+----------+----------+----------+-------+-----+-----+

$ pvmctl lpar list --object-id id=2
Logical Partitions
+----------+----+---------+-----------+---------------+------+-----+-----+
| Name     | ID | State   | Env       | Ref Code      | Mem  | CPU | Ent |
+----------+----+---------+-----------+---------------+------+-----+-----+
| novalink | 2  | running | AIX/Linux | Linux ppc64le | 2560 | 2   | 0.5 |
+----------+----+---------+-----------+---------------+------+-----+-----+

$ pvmctl lpar list -d name id state --where LogicalPartition.state=running
name=novalink,id=2,state=running
name=pvc,id=3,state=running

$ pvmctl lpar list -d name id state --where LogicalPartition.state!=running
name=vm1,id=4,state=not activated
name=vm2,id=5,state=not activated

---------------------------------------------------------------------

pvmctl creating objects:

creating an LPAR:
$ pvmctl lpar create --name vm1 --proc-unit .1 --sharing-mode uncapped --type AIX/Linux --mem 1024 --proc-type shared --proc 2
$ pvmctl lpar list
Logical Partitions
+-----------+----+-----------+-----------+-----------+------+-----+-----+
| Name      | ID | State     | Env       | Ref Code  | Mem  | CPU | Ent |
+-----------+----+-----------+-----------+-----------+------+-----+-----+
| novalink> | 1  | running   | AIX/Linux | Linux pp> | 2560 | 2   | 0.5 |
| vm1       | 4  | not acti> | AIX/Linux | 00000000  | 1024 | 2   | 0.1 |
+-----------+----+-----------+-----------+-----------+------+-----+-----+


creating a virtual Ethernet adapter:
$ pvmctl vswitch list
Virtual Switches
+------------+----+------+---------------------+
| Name       | ID | Mode | VNets               |
+------------+----+------+---------------------+
| ETHERNET0  | 0  | Veb  | VLAN1-ETHERNET0     |
| MGMTSWITCH | 1  | Vepa | VLAN4094-MGMTSWITCH |
+------------+----+------+---------------------+

$ pvmctl vea create --slot 2 --pvid 1 --vswitch ETHERNET0 --parent-id name=vm1

$ pvmctl vea list
Virtual Ethernet Adapters
+------+------------+------+--------------+------+-------+--------------+
| PVID | VSwitch    | LPAR | MAC          | Slot | Trunk | Tagged VLANs |
+------+------------+------+--------------+------+-------+--------------+
| 1    | ETHERNET0  | 1    | 02224842CB34 | 3    | False |              |
| 1    | ETHERNET0  | 4    | 1A05229C5DAC | 2    | False |              |
| 1    | ETHERNET0  | 2    | 3E5EBB257C67 | 3    | True  |              |
| 1    | ETHERNET0  | 3    | 527A821777A7 | 3    | True  |              |
| 4094 | MGMTSWITCH | 1    | CE46F57C513F | 6    | True  |              |
| 4094 | MGMTSWITCH | 2    | 22397C1B880A | 6    | False |              |
| 4094 | MGMTSWITCH | 3    | 363100ED375B | 6    | False |              |
+------+------------+------+--------------+------+-------+--------------+

---------------------------------------------------------------------

pvmctl updating/deleting objects

Update the desired memory on vm1 to 2048 MB:
$ pvmctl lpar update -i name=vm1 --set-fields PartitionMemoryConfiguration.desired=2048
$ pvmctl lpar update -i id=2 -s PartitionMemoryConfiguration.desired=2048
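
The result can be verified by reusing the listing syntax shown earlier:

$ pvmctl lpar list --object-id name=vm1          <--the Mem column should now show 2048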


Delete an LPAR:
$ pvmctl lpar delete -i name=vm4
[PVME01050010-0056] This task is only allowed when the partition is powered off.
$ pvmctl lpar power-off -i name=vm4
Powering off partition vm4, this may take a few minutes.
Partition vm4 power-off successful.
$ pvmctl lpar delete -i name=vm4

---------------------------------------------------------------------

Additional commands

$ pvmctl vios power-off -i name=vios1            <--shutdown VIOS
$ pvmctl lpar power-off --restart -i name=vios1  <--restart LPAR

$ mkvterm -m sys_name -p vm1                     <--open a console

---------------------------------------------------------------------

viosvrcmd

viosvrcmd runs VIOS commands from the NovaLink LPAR on the specified VIO server. The command is passed to the VIOS over the underlying RMC connection.
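
A harmless query command is a quick way to verify that this path works before running anything destructive (ioslevel is a standard VIOS command):

$ viosvrcmd --id 2 -c "ioslevel"                 <--returns the VIOS level if RMC between NovaLink and the VIOS is healthy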

An example: 
Allocating a logical unit from an existing SSP on the VIOS with partition ID 2; the allocated logical unit is then mapped to a virtual SCSI adapter in the target LPAR.

$ viosvrcmd --id 2 -c "lu -create -sp pool1 -lu vdisk_vm1 -size 20480"    <--create a Logical Unit on VIOS (vdisk_vm1)
Lu Name:vdisk_vm1
Lu Udid:955b26de3a4bd643b815b8383a51b718

$ pvmctl lu list
Logical Units
+-------+-----------+----------+------+------+-----------+--------+
| SSP   | Name      | Cap (GB) | Type | Thin | Clone     | In use |
+-------+-----------+----------+------+------+-----------+--------+
| pool1 | vdisk_vm1 | 20.0     | Disk | True | vdisk_vm1 | False  |
+-------+-----------+----------+------+------+-----------+--------+

$ pvmctl scsi create --type lu --lpar name=vm1 --stor-id name=vdisk_vm1 --parent-id name=vios1
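
The new mapping can be checked the same way as the other object types (assuming the scsi object supports the list verb like the objects shown earlier):

$ pvmctl scsi list                               <--list virtual SCSI mappings, including the new LU mapping for vm1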

---------------------------------------------------------------------

Backups

PowerVM NovaLink automatically backs up the hypervisor configuration (LPAR configurations) and the VIOS configuration data using cron jobs. Backup files are stored in the /var/backups/pvm/SYSTEM_MTMS/ directory. The VIOS configuration data is copied from the VIOS (/home/padmin/cfgbackups) to NovaLink.

$ ls -lR /var/backups/pvm/8247-21L*03212E3CA
-rw-r----- 1 root pvm_admin 2401 Jun 1 00:15 system_daily_01.bak
-rw-r----- 1 root pvm_admin 2401 May 30 00:15 system_daily_30.bak
-rw-r----- 1 root pvm_admin 2401 May 31 00:15 system_daily_31.bak
-rw-r----- 1 root pvm_admin 2401 Jun 1 01:15 system_hourly_01.bak
-rw-r----- 1 root pvm_admin 2401 Jun 1 02:15 system_hourly_02.bak
-rw-r----- 1 root pvm_admin 4915 Jun 1 00:15 vios_2_daily_01.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4914 May 30 00:15 vios_2_daily_30.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4910 May 31 00:15 vios_2_daily_31.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4911 Jun 1 00:15 vios_3_daily_01.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4911 May 30 00:15 vios_3_daily_30.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4910 May 31 00:15 vios_3_daily_31.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4909 Jun 1 01:15 vios_3_hourly_01.viosbr.tar.gz
-rw-r----- 1 root pvm_admin 4909 Jun 1 02:15 vios_3_hourly_02.viosbr.tar.gz

The hypervisor (partition configuration) backup can be manually initiated by using the bkprofdata command:
$ sudo bkprofdata -m gannet -o backup
$ ls -l /etc/pvm
total 8
drwxr-xr-x 2 root root 4096 May 26 17:32 data
-rw-rw---- 1 root root 2401 Jun 2 17:05 profile.bak
$ cat /etc/pvm/profile.bak
FILE_VERSION = 0100
CONFIG_VERSION = 0000000000030003
TOD = 1464901557123
MTMS = 8247-21L*212E3CA
SERVICE_PARTITION_ID = 2
PARTITION_CONFIG =
lpar_id\=1,name\=novalink_212E3CA,lpar_env\=aixlinux,mem_mode\=ded,min_mem\=2048,desired_mem\=2560,max_mem\=16384,hpt_ratio\=6,mem_expansion\=0.00,min_procs\=1,desired_procs\=2,max_procs\=10,proc_mode\=shared,shared_proc_pool_id\=0,sharing_mode\=uncap,min_proc_units\=0.05,desired_proc_units\=0.50,max_proc_units\=10.00,uncap_weight\=128,allow_perf_collection\=0,work_group_id\=none,io_slots\=2101001B/none/0,"virtual_eth_adapters\=3/1/1//0/0/0/B2BBCA66F6F1/all/none,6/1/4094//1/0/1/EA08E1233F8A/all/none","virtual_scsi_adapters\=4/client/2/vios1/2/0,5/client/3/vios2/2/0",auto_start\=1,boot_mode\=norm,max_virtual_slots\=2000,lpar_avail_priority\=127,lpar_proc_compat_mode\=default
PARTITION_CONFIG =
lpar_id\=2,name\=vios1,lpar_env\=vioserver,mem_mode\=ded,min_mem\=1024,desired_mem\=4096,max_mem\=16384,hpt_ratio\=6,mem_expansion\=0.00,min_procs\=2,desired_procs\=2,max_procs\=64,proc_mode\=shared,shared_proc_pool_id\=0,sharing_mode\=uncap,min_proc_units\=0.10,desired_proc_units\=1.00,max_proc_units\=10.00,uncap_weight\=255,allow_perf_collection\=0,work_group_id\=none,"io_slots\=21010013/none/0,21030015/none/0,2104001E/none/0","virtual_eth_adapters\=3/1/1//1/0/0/36BACB2677A6/all/none,6/1/4094//0/0/1/468CA1242EC8/all/none",virtual_scsi_adapters\=2/server/1/novalink_212E3CA/4/0,auto_start\=1,boot_mo
...
...


The VIOS configuration data backup can be manually initiated by using the viosvrcmd --id X -c "viosbr" command:
$ viosvrcmd --id 2 -c "viosbr -backup -file /home/padmin/cfgbackups/vios_2_example.viosbr"
Backup of this node (gannet2.pbm.ihost.com.pbm.ihost.com) successful
$ viosvrcmd --id 2 -c "viosbr -view -file /home/padmin/cfgbackups/vios_2_example.viosbr.tar.gz"


$ viosvrcmd --id X -c "backupios -cd /dev/cd0 -udf -accept"             <--creates bootable media
$ viosvrcmd --id X -c "backupios -file /mnt [-mksysb]"                  <--for NIM backup on NFS (restore with installios or mksysb)
$ viosvrcmd --id X -c "backupios -file /mnt [-mksysb] [-nomedialib]"    <--exclude optical media

---------------------------------------------------------------------
