IBM POWERVC (Power Virtualization Center)
PowerVC is a cloud management application from IBM which can be installed on a Linux server. With the help of the GUI we can manage the virtualization of Power Systems (stop/start LPARs, create/delete/migrate LPARs, add storage to them…). It is based on OpenStack, an open-source cloud management project without any hardware dependency, and PowerVC uses OpenStack components.
When a Power server is controlled by PowerVC, it can be managed:
- By the graphical user interface (GUI)
- By scripts that call the IBM PowerVC REST APIs (see the example below)
- By higher-level tools that call IBM PowerVC by using the standard OpenStack APIs
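For example, a script can first request a token from the OpenStack identity (Keystone) service and then call the compute (Nova) API with that token. A minimal sketch with curl; the host name, user, password and ports are placeholders (5000 for identity and 8774 for compute are the usual OpenStack defaults, check your installation):
curl -k -i https://<powervc_host>:5000/v3/auth/tokens -H "Content-Type: application/json" -d '{"auth":{"identity":{"methods":["password"],"password":{"user":{"name":"root","domain":{"name":"Default"},"password":"<password>"}}},"scope":{"project":{"name":"ibm-default","domain":{"name":"Default"}}}}}'
The X-Subject-Token header of the response contains the token, which can then be used for further API calls:
curl -k https://<powervc_host>:8774/v2.1/servers -H "X-Auth-Token: <token>"     <--lists the managed VMs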
In PowerVC these terms are used:
Host: This is a Power server (the same as a Managed System on the HMC)
VM: A virtual machine, which is running on a Host (the same as an LPAR)
Image: This is a copy of a VM which can be used for future VM creations (it is basically a disk copy of rootvg)
Volume: This is a disk or LUN
Deploy: When we create a new VM from an Image, it is called deploying a new VM
---------------------------------------------------------------
NOVALINK
Using PowerVC, we have 2 options to manage Power servers: through an HMC or through a NovaLink LPAR. If we choose NovaLink, a special partition is needed on each Power server, which performs the same functions as an HMC. (A combined solution is also possible, where both HMC and NovaLink exist together.)
The NovaLink architecture enables OpenStack to work with PowerVM (and PowerVC) by providing a direct connection to the Power server (rather than communicating through an HMC). In an HMC-managed environment, PowerVC can manage up to 30 hosts and up to 3000 VMs. In a NovaLink-based environment, PowerVC can manage up to 200 hosts and 5000 VMs. It is possible to use PowerVC to manage PowerVM NovaLink systems while still managing HMC-managed systems as well.
NovaLink is enabled via a software package that runs in a Linux VM on a POWER8 host. NovaLink provides a consistent interface (in line with other supported hypervisors such as KVM), so OpenStack services can communicate with the LPARs consistently through the NovaLink partition.
---------------------------------------------------------------
PowerVC and OpenStack
PowerVC is built on OpenStack, so the main OpenStack functions are built into PowerVC as well. These functions are (see the CLI example after the list):
- Image management (in OpenStack it is called "Glance")
- Compute (VM) management (in OpenStack it is called "Nova")
- Network management (in OpenStack it is called "Neutron")
- Storage management (in OpenStack it is called "Cinder")
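Because these are standard OpenStack services, the usual OpenStack command-line clients also work against PowerVC. A sketch, assuming the clients are run on the PowerVC management server with the OS_* environment variables (OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_PROJECT_NAME...) set:
openstack image list       <--Glance: lists the images known to PowerVC
openstack server list      <--Nova: lists the managed VMs
openstack network list     <--Neutron: lists the defined networks
openstack volume list      <--Cinder: lists the managed volumes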
---------------------------------------------------------------
Deploying Virtual Machines (Host Group - Placement Policy)
In order to use PowerVC and to create new VMs, we need Images, Hosts, Networks and Storage space. A new LPAR is created from an Image, and during creation we need to choose which Power Server (Host) and which Network to use.
Power servers are called "Hosts" in PowerVC. After adding several Hosts to PowerVC, we can group these Hosts by creating "Host Groups". Each Host Group has a Placement Policy, which controls where (on which host) our new VMs are created.
For example, if we choose the policy "Memory utilization balanced", our new VM will be deployed on the host with the lowest memory utilization. Every host must be in a host group, and during migration VMs are kept within the host group. Out of the box, PowerVC comes with a “default” host group (a special group that can’t be deleted), which houses any host that is registered with PowerVC but not added to a specific host group.
Placement policies:
- Striping: Distributes VMs evenly across all hosts (CPU/RAM/Storage/Network).
- Packing: Places VMs on the host that contains the most VMs (until its resources are fully used).
- CPU utilization balanced: Places VMs on the host with the lowest CPU utilization in the host group.
- CPU allocation balanced: Places VMs on the host with the lowest percentage of its CPU allocated to VMs.
- Memory utilization balanced: Places VMs on the host with the lowest memory utilization in the host group.
- Memory allocation balanced: Places VMs on the host that will have the lowest percentage of its memory allocated to VMs after the deployment or relocation.
When a new host is added to a host group and the placement policy is set to striping mode, new VMs are deployed on the new host until the resource usage of this host is about the same as on the previously installed hosts (until it catches up with the existing hosts).
The placement policies are predefined; it is not possible to create new policies. If during VM deployment we choose a specific host (and not a host group), the placement policy is ignored for that VM.
(Some tips from the Redbook: Use the striping policy rather than the packing policy. Limit the number of concurrent deployments to match the number of hosts.)
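Under the covers, host groups correspond to OpenStack host aggregates, so they can also be inspected from the command line. A sketch, assuming the OpenStack CLI is configured as above (the host group name is a placeholder):
openstack aggregate list                  <--lists the host groups (aggregates)
openstack aggregate show <host_group>     <--shows the member hosts and metadata of a host group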
---------------------------------------------------------------
Collocation rules
While placement policies are related to Hosts (which Host should be used for VM creation), collocation rules define relationships between VMs (they specify which VMs should or should not run together with other VMs on the same Host). A collocation rule also has a policy, which can be either “affinity” or “anti-affinity”. An affinity rule means that the VMs in the collocation rule must run on the same host (“best friends”), and an anti-affinity rule means that the VMs must run on different hosts (“worst enemies”). PowerVC follows these rules when performing live migration, remote restart or host evacuation operations (any mobility operation). This makes automation much simpler, as we don't need to keep these rules in mind.
You can add a VM to a collocation rule only after deployment (doing this at deployment time is not possible). Collocation rules can be created in the "Configuration" menu under "Collocation Rules".
It is possible that a user starts a mobility operation outside of PowerVC (e.g., directly on the HMC), so the VM could be moved to a host that causes a violation of the collocation rule. In such a case, the policy state will be displayed as “violated” in PowerVC and serve as a visual indicator to the user that some remedial action is needed.
It is not possible to migrate or remote restart a VM that is a member of an “affinity” collocation rule. This restriction exists because there would be a period of time in which the VM is not on the same host, and it would violate the collocation rule. If a mobility operation is needed on a VM in an “affinity” collocation rule, we need to remove it from the rule, perform the mobility operation and then re-add it to the rule.
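Collocation rules correspond to OpenStack server groups, so they can also be handled from the command line. A sketch, assuming the OpenStack CLI is configured against PowerVC (the rule name is a placeholder):
openstack server group list                                      <--lists the collocation rules (server groups)
openstack server group create --policy anti-affinity web_rule    <--creates an anti-affinity rule named web_rule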
---------------------------------------------------------------
Templates
Rather than defining all characteristics for each VM (CPU/RAM…) or each storage unit that must be created, we can use a template that was previously defined.
Three types of templates are available:
- Compute templates: These templates are used to define processing units and memory that are needed by a partition.
- Deploy templates: These templates are used to allow users to quickly deploy an image. (more details below)
- Storage templates: These templates are used to define storage settings, such as a specific volume type, storage pool, and storage provider.
Deploy templates:
A deploy template includes everything necessary to quickly create a VM. It contains:
- the deployment target (a Host group or a specific Host), Storage Connectivity Group and any other policies
- compute template (needed CPU, RAM configuration)
- which image to use during deployment
- network (VLAN) needed for the new VM
- any other scripts to be called during first boot (this part is handled by cloud-init)
A deploy template is basically just a collection of information needed for the creation of the new VM. In contrast to an image, a deploy template does not use any storage space. (Images do use storage space; for example an AIX image can reside on a 100 GB LUN, so creating new images takes up more and more space on the storage, while creating new deploy templates does not.)
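In plain OpenStack terms, a deployment combines the same pieces on the command line. A sketch only; the image, flavor (compute template) and network names are placeholders:
openstack server create --image aix72_base --flavor medium_compute --network VLAN_100 aixvm01     <--deploys a new VM called aixvm01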
Creating Deploy Templates:
1. From the Images window, select the image that you want to use to create a deploy template and click Create Template from Image.
2. Fill out the information in the window that opens, then click Create Deploy Template.
3. The deploy template is now listed on the Deploy Templates tab of the Images window.
---------------------------------------------------------------
STORAGE:
Storage provider: Any system that provides storage volumes (SVC, EMC... or SSP). PowerVC may also refer to these as storage controllers.
Fabric: The name for a collection of SAN switches
Storage pool: A storage resource (managed by storage providers) in which volumes are created. PowerVC discovers storage pools (it can't create one).
Shared storage pool: A PowerVM feature that must be created on the VIOS before PowerVC can create volumes on the SSP. (PowerVC cannot modify it.)
Volume: A disk or LUN. It is created from a storage pool and presented as a virtual disk to the partitions.
VMs can access their storage by using vSCSI, NPIV or an SSP (which provides vSCSI LUNs).
Storage templates:
Storage templates are used to speed up the creation of a disk. A storage template defines several properties of the disk (thin provisioning, I/O group, mirroring...). The disk size is not part of the template. When you register a storage provider, a default storage template is created for that provider. After a disk is created with a template, you cannot modify the template settings.
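Storage templates correspond to OpenStack (Cinder) volume types, so they can also be listed and used from the command line. A sketch, assuming the OpenStack CLI is configured against PowerVC (the template and volume names are placeholders):
openstack volume type list                                                 <--lists the storage templates (volume types)
openstack volume create --type <storage_template> --size 100 datavol01    <--creates a 100 GB volume using a template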
Storage connectivity groups
In short, it refers to a set of VIOSs with access to the same storage controllers. When a VM is created, PowerVC needs to identify which host has connectivity to the requested storage. Also, when a VM is migrated, PowerVC must ensure that the target host also provides connectivity to the volumes of the VM. The purpose of a storage connectivity group is to define settings that control how volumes are attached to VMs, including the connectivity type for boot and data volumes, physical FC port restrictions, fabrics, and redundancy requirements for VIOSs, ports, and fabrics. A storage connectivity group contains a set of VIOSs that are allowed to participate in volume connectivity.
Custom storage connectivity groups provide flexibility when different policies are needed for different types of VMs. For example, a storage connectivity group is needed to use VIOS_1 and VIOS_2 for production VMs and another storage connectivity group is needed for VIOS_3 for development VMs. Many other connectivity policies are available with storage connectivity groups.
When a VM is deployed with PowerVC, a storage connectivity group must be specified. The VM is associated with that storage connectivity group during the VM's existence. A VM can be deployed only on Power Systems hosts that satisfy the storage connectivity group settings. The VM can be migrated only within its associated storage connectivity group and host group.
The default storage connectivity groups for NPIV connectivity, vSCSI connectivity and SSP are created when PowerVC recognizes that the corresponding resource is available for management. After you add the storage providers and define the storage templates, you can create storage volumes.
Only data volumes must be created manually. Boot volumes are handled by PowerVC automatically. When you deploy a partition, IBM PowerVC automatically creates the boot volumes and data volumes that are included in the images.
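For example, a data volume can be created and attached to an existing VM from the command line as well (a sketch; the VM and volume names are placeholders):
openstack volume create --size 50 datavol02          <--creates a 50 GB data volume
openstack server add volume aixvm01 datavol02        <--attaches the volume to the VM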
Shared storage pool
SSPs are supported on hosts that are managed either by HMC or NovaLink. The SSP is configured manually, without PowerVC (creation of a cluster on VIO servers, adding disks to the pool). After that PowerVC will discover the SSP when it discovers the VIOSs. When a VM is created PowerVC will create logical units (LUs) in the SSP, then PowerVC instructs the VIOS to map these LUs to the VM (VIO client partition) as a vSCSI device.
---------------------------------------------------------------
NETWORK
When we set up PowerVC for use, it is good practice to create all networks that will be needed for future VM creation. (These VLANs need to be added on the switch ports that are used by the SEA.)
PowerVC requires that the SEAs are created before it starts to manage the systems. If you are using the SEA in sharing/auto mode with VLAN tagging, create the SEA without any additional VLANs assigned on its Virtual Ethernet Adapters; PowerVC adds or removes these VLANs on the SEAs when necessary (at VM creation and deletion). A sample SEA creation command is shown after the example list below.
For example:
- If you deploy a VM on a new network, PowerVC adds the VLAN on the SEA.
- If you delete the last VM of a specific network (on a host), the VLAN is automatically deleted.
- If the VLAN is the last VLAN that was defined on the Virtual Ethernet Adapter, this VEA is removed from the SEA.
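As mentioned above, the SEA itself must exist before PowerVC manages the host. On the VIOS, such an SEA (sharing mode, VLAN tagging, no extra VLANs on the trunk adapter yet) might be created as follows; the adapter names and attribute values are placeholders that depend on the actual setup:
$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=sharing ctl_chan=ent5     <--ent0: physical adapter, ent4: trunk virtual adapter, ent5: control channel adapter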
When a network is created in PowerVC, a SEA is automatically chosen from each registered host. If the VLAN does not exist yet on the SEA, PowerVC deploys that VLAN to the SEA. To manage PowerVM, PowerVC requires that at least one SEA is defined on the host. PowerVC supports the use of virtual switches in the system, which are useful to separate a single VLAN across multiple distinct physical networks. (To split a single VLAN across multiple SEAs, break those SEAs into separate virtual switches.)
In environments with dual VIOSs, the secondary SEA is not shown except as an attribute of the primary SEA. If VLANs are added manually to SEA after the host is managed by PowerVC, the new VLAN is not automatically discovered by PowerVC. To discover a newly added VLAN, run the "Verify Environment" function.
PowerVC supports Dynamic Host Configuration Protocol (DHCP) or static IP address assignment. When DHCP is used, PowerVC is not aware of the IP addresses of the VMs that it manages. PowerVC also supports host name resolution through hardcoded entries (/etc/hosts) or the Domain Name System (DNS).
Since Version 1.2.2, PowerVC can dynamically add a network interface controller (NIC) to a VM or remove a NIC from a VM. PowerVC does not set the IP address for new network interfaces that are created after the machine deployment. Any removal of a NIC results in freeing the IP address that was set on it.
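Since the network service is Neutron, a NIC can also be added or removed with the standard CLI. A sketch; the network, port and VM names are placeholders:
openstack port create --network VLAN_100 aixvm01_port2     <--creates a new port (NIC) on a network
openstack server add port aixvm01 aixvm01_port2            <--attaches the port to the VM
openstack server remove port aixvm01 aixvm01_port2         <--detaches it again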
---------------------------------------------------------------
PROJECTS
A project (sometimes called a tenant) is a unit of ownership. The "ibm-default" project is created during installation, but PowerVC supports additional projects for resource segregation. By creating several projects we can separate virtual machines, volumes, and images from each other. (For example, a specific virtual machine can be seen only in one project and is not visible in another project.) Other components of PowerVC, such as storage connectivity groups and compute templates, do not belong to a specific project (these are generally available in every project). Only users with a role assignment can work with the resources of a specific project.
After creating a project, you are automatically assigned the admin role for that project. This allows you to assign additional roles to users in that project.
Role assignments are specific to a project. For example, a user could have the vm_manager and storage_manager roles in one project and only the viewer role in another project. Users can only log in to one project at a time. If they have a role on multiple projects, they can switch to other projects. When users log in to a project they will only see resources, messages etc. that belong to that project. They will not see resources that belong to other projects.
OpenStack does not support moving resources from one project to another. You can move volumes and virtual machines by unmanaging them and then remanaging them in the new project. All resources within a project must be deleted or unmanaged before the project can be deleted. The ibm-default project cannot be deleted.
openstack project create      create and manage projects
openstack role add ...        assign roles to users in a project
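For example (a sketch; the project name and user name are placeholders, vm_manager being one of the PowerVC roles mentioned above):
openstack project create --description "Test environment" test-project     <--creates a new project
openstack role add --project test-project --user jsmith vm_manager         <--gives the user jsmith the vm_manager role in that project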
---------------------------------------------------------------
Environment checker:
This is a single interface to confirm that resources (Compute, Storage, Network etc.) registered in PowerVC meet the configuration and hardware level requirements.
The environment checker tool verifies these (and more):
- Management server has the required resources in terms of memory, disk space etc.
- Hosts and storage are the correct machine type and model.
- The allowed number of hosts is not exceeded.
- The correct level of Virtual I/O Server is installed on your hosts.
- The Virtual I/O Server is configured correctly on all of your hosts.
- Storage and SAN switches are configured correctly.
…
---------------------------------------------------------------
Commands:
powervc-diag collects diagnostic data from PowerVC
powervc-log enables or disables debug log level
powervc-log-management allows viewing and modifying the settings for log management
powervc-register registers an OpenStack supported storage provider or fabric.
powervc-services stops, starts PowerVC services and checks their status
stop stops PowerVC (all services)
start starts PowerVC (all services)
status show status of all services
powervc-config has many subcommands to configure PowerVC
purge removes all events that are stored in the Panko database
general ifconfig changes the host name or IP address of the PowerVC server
storage storage-related configurations
compute configures many different options (like mover service partition IP, VLAN-related configs on VMs, etc.)
web inactivity-timeout configures the idle timeout in the UI before the user is prompted and logged out (0 or less disables the timer)
reauth-warn-time the user is asked for the password before the token expires; entering the password obtains a new token (0 or less disables the timer)
powervc-image image-related operations
config displays or changes the command configuration properties
import imports an uncompressed deployable image from an OVA into PowerVC
export exports a deployable image from PowerVC to a local OVA
list lists the deployable images managed by PowerVC
Switching to LDAP and switching back:
powervc-config identity repository <--shows whether OS or LDAP authentication is currently in use
powervc-config identity repository --user root --type os <--switch back to OS authentication (old users are kept)
powervc-config identity repository -t ldap … <--switching to LDAP authentication
Enabling debug:
powervc-config general debug <--checking each service if debug is enabled or not for that service
powervc-config identity debug --enable --restart <--enabling debug for identity service
powervc-config identity debug --disable --restart <--disabling debug for identity service
powervc-backup --targetdir /powervc/backup <--creating a backup
powervc-restore --targetdir /powervc/backup/<backup dir>/ <--restoring a powervc backup
/opt/ibm/powervc/version.properties <--contains version info and other properties of PowerVC
https://ip_address/powervc/version <--gets the current version of PowerVC
---------------------------------------------------------------
PowerVC backup
1. Mount a remote NFS share where the backup will be saved
[root@powervc ~]# mount nim01:/repository/BACKUP /mnt
mount.nfs: Remote I/O error
The Remote I/O error can happen because PowerVC runs on Linux (Red Hat), which tries NFS version 4 by default; if NFS 4 is not configured on the AIX side, choose NFS version 3 during the mount:
[root@powervc ~]# mount -o vers=3 nim01:/repository/BACKUP /mnt
2. Start the backup, which takes about 5 minutes; during that time the web interface is not available
[root@powervc ~]# powervc-backup --targetdir /mnt
Continuing with this operation will stop all PowerVC services. Do you want to continue? (y/N):y
Stopping PowerVC services...
Backing up the databases and data files...
Database and file backup completed. Backup data is in archive /mnt/20180622105847651966/powervc_backup.tar.gz
Starting PowerVC httpd services...
Starting PowerVC bumblebee services...
Starting PowerVC services...
PowerVC backup completed successfully.