
POWERVM - BASICS

Basics

PowerVM (Power Virtual Machine):

PowerVM, formerly known as Advanced POWER Virtualization (APV), is the virtualization solution for AIX.
PowerVM has 3 editions:

Express Edition: Hypervisor, DLPAR, up to 3 partitions per server, VIOS (1 per server), IVM, NPIV
Standard Edition: + up to 254 partitions per server, + VIOS (dual, 2 per server), + HMC, + Multiple Shared Processor Pools, + Shared Storage Pools
Enterprise Edition: ++ Active Memory Sharing, ++ Live Partition Mobility
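A quick way to check which virtualization capabilities a given machine reports (assuming it is HMC-managed; the managed-system name P750_1 below is just a placeholder) is the lssyscfg command on the HMC:

lssyscfg -r sys -m P750_1 -F name,capabilities    <--lists the capability flags of the managed system (micro-partitioning, AMS, LPM support, and so on)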


Integrated Virtualization Manager (IVM)
For a smaller environment, not all functions of an HMC are required, and deploying additional HMC hardware may not be practical, so IBM developed the IVM: a hardware management solution that provides a subset of the HMC features for a single server, avoiding the need for a dedicated HMC.
IVM manages standalone servers, so a second server managed by IVM has its own instance of the IVM. With this subset of HMC functionality, IVM enables an administrator to set up a system quickly. The IVM is integrated into the Virtual I/O Server product.


POWER Hypervisor
The POWER Hypervisor is the foundation of IBM PowerVM. It is a firmware layer sitting between the hosted operating systems and the server hardware, and it is always active.
It delivers the functions that enable the main virtualization capabilities: dedicated and micro-partitions, virtual processors, virtual Ethernet, virtual SCSI and virtual Fibre Channel adapters, and virtual consoles.
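The hypervisor itself is transparent to the hosted operating systems, but its activity can be observed from an AIX LPAR with the standard lparstat flags, for example:

lparstat -h 2 5    <--interval output extended with hypervisor columns (%hypv, hcalls)
lparstat -H 2 1    <--detailed per-hypervisor-call statistics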


LPAR - Dedicated processors

Dedicated processors are whole processors assigned to dedicated-processor partitions (LPARs). The minimum processor allocation for such an LPAR is one (1) whole processor, and the maximum is the total number of installed processors in the server.
Each processor is wholly dedicated to the LPAR. It is not possible to mix shared processors and dedicated processors in the same partition.


Micro-Partitioning
Micro-Partitioning is the ability to distribute the processing capacity of one or more physical processors among one or more logical partitions. A micro-partition can be allocated as little as 0.1 processing units, and capacity is assigned in increments of 0.01 processing units.
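From inside an AIX micro-partition, the current allocation (entitled capacity, mode, virtual processors) can be checked with lparstat; the trimmed output below is only an example:

lparstat -i | grep -Ei "type|mode|entitled capacity|virtual cpus"
Type : Shared-SMT-4
Mode : Uncapped
Entitled Capacity : 0.50
Online Virtual CPUs : 2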


Shared-processor pools
In POWER5-based servers, a physical shared-processor pool is a set of physical processors that are not dedicated to any logical partition. Micro-Partitioning technology coupled with the POWER Hypervisor facilitates the sharing of processing units between micro-partitions.

Multiple Shared-Processor Pools (MSPPs) is a capability supported on POWER6 and later. It allows a system administrator to create sets of micro-partitions in order to control the processor capacity that each set can consume from the physical shared-processor pool. A set of micro-partitions forms a unit that can be managed as a whole, for example by limiting how much processor capacity it can use.

On all Power Systems supporting Multiple Shared-Processor Pools, a default Shared-Processor Pool is always automatically defined. The default Shared-Processor Pool has a pool identifier of zero (SPP-ID = 0). The default behavior of the system, with only SPP0 defined, is the same as that of a POWER5 server with only a physical shared-processor pool defined: micro-partitions are created within SPP0 by default, and processor resources are shared in the same way.

If several partitions from different shared processor pools are competing for additional resources, the partitions with the highest weight will be served first. You must therefore define a partition’s weight based on the weight of partitions in other shared processor pools.
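Shared-processor pools are defined and inspected on the HMC. A minimal sketch with the HMC CLI, assuming a managed system called P750_1 and a pool called DevPool (attribute names may differ slightly between HMC levels):

lshwres -r procpool -m P750_1    <--lists the shared-processor pools, their IDs and capacity limits
chhwres -r procpool -m P750_1 --poolname DevPool -a "max_pool_proc_units=4"    <--caps the capacity the micro-partitions in DevPool can consume together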


Shared Storage Pool
A shared storage pool is a pool of SAN storage devices assigned to multiple Virtual I/O Servers; it is based on a cluster of Virtual I/O Servers. When shared storage pools are used, the Virtual I/O Server provides storage through logical units (file-backed storage devices) that are assigned to client partitions, where each appears as a virtual SCSI disk. Shared storage pools use thin provisioning.
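As a rough sketch of how such a pool is created and used on the VIOS command line (cluster name, pool name, disk names and vhost0 below are made-up examples; check the VIOS documentation for the exact syntax of your level):

cluster -create -clustername bb_cluster -repopvs hdisk2 -spname bb_sp -sppvs hdisk3 hdisk4 -hostname vios1    <--creates the VIOS cluster and the shared storage pool
mkbdsp -clustername bb_cluster -sp bb_sp 20G -bd lu_client1 -vadapter vhost0    <--creates a thin-provisioned logical unit and maps it to a client vhost adapter
lssp -clustername bb_cluster    <--shows pool size and free space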


Storage Pool vs Volume Group
The IVM and HMC environments present two different interfaces for storage management under different names. The storage pool interface under IVM is essentially the same as the LVM interface under HMC, and the terms are sometimes used interchangeably: volume group can refer to both volume groups and storage pools, and logical volume can refer to both logical volumes and storage pool backing devices.


Active Memory Expansion:
Active Memory Expansion is the ability to expand the memory available to an AIX partition beyond the amount of assigned physical memory. Active Memory Expansion compresses memory pages (so it generates CPU load) to provide additional memory capacity for a partition. (It is a POWER7 feature.) Starting with POWER7+, memory page compression and decompression is offloaded to a hardware accelerator.
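AIX ships the amepat planning tool, which monitors a running workload and recommends an expansion factor (the factor itself is then set in the partition profile on the HMC). For example:

amepat 60    <--monitors the workload for 60 minutes and reports suggested AME expansion factors together with their estimated CPU cost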


Active Memory Sharing:
Active Memory Sharing (AMS) enables the sharing of a pool of physical memory among partitions on a single Power server (POWER6 or later), helping to increase memory utilization and drive down system costs.
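AMS partitions draw their memory from a shared memory pool defined on the HMC. A hedged example of inspecting it from the HMC CLI (the managed-system name P750_1 is a placeholder):

lshwres -r mempool -m P750_1    <--shows the shared memory pool size and the paging VIOS partitions
lshwres -r mem -m P750_1 --level lpar -F lpar_name,mem_mode,curr_mem    <--mem_mode shows which LPARs use dedicated or shared memory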


Active Memory Deduplication:
To optimize memory use, Active Memory Deduplication avoids keeping duplicate data in multiple distinct memory locations. On traditional LPARs, identical data may be stored at several different positions in main memory. Active Memory Deduplication keeps the data in just one physical memory page and frees the other pages with identical content. The result is multiple logical memory pages pointing to the same physical memory page, which saves memory space. (It is available on LPARs using Active Memory Sharing.)


Active Memory Mirroring:
It is sometimes called system firmware mirroring. Active Memory Mirroring for the hypervisor mirrors the main memory used by the system firmware to ensure greater memory availability. When it is enabled, an uncorrectable error resulting from a failure of main memory used by the system firmware will not cause a system-wide outage; the system maintains two identical copies of the system hypervisor in memory at all times.

Virtual network: 
Enables interpartition communication without assigning a physical network adapter to each partition. If the virtual network is bridged (SEA), partitions can communicate with external networks. 

Virtual switch: 
An in-memory, hypervisor implementation of a layer-2 switch.

VLAN:
A VLAN (virtual local area network) is a method to logically segment a physical network so that layer 2 connectivity is restricted to members that belong to the same VLAN. This separation is achieved by tagging Ethernet packets with their VLAN ID and then restricting delivery to members of that VLAN. The default VLAN ID for a switch port is referred to as the Port VID (PVID).

The VLAN ID can be added to an Ethernet packet either by a VLAN-aware host, or by the switch in the case of VLAN-unaware hosts. Therefore, ports on an Ethernet switch must be configured with information that indicates whether the host connected is VLAN-aware. For VLAN-unaware hosts, a port is set up as untagged and the switch tags all packets that enter through that port with the Port VLAN ID (PVID). The switch also untags all packets that exit that port before delivery to the VLAN unaware host. A port that is used to connect VLAN-unaware hosts is called an untagged port, and it can be a member of only a single VLAN identified by its PVID. 

Hosts that are VLAN-aware can insert and remove their own tags and can be members of more than one VLAN. These hosts are typically attached to ports that do not remove the tags before the packets are delivered to the host, but the port still inserts its PVID tag when an untagged packet enters it. Such a port allows only packets that are untagged or tagged with the tag of one of the VLANs that the port belongs to.
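On AIX or on the VIOS, the VLAN settings of a virtual Ethernet adapter (its PVID and any additional tagged VLAN IDs) can be checked with entstat; ent1 below is just an example adapter:

entstat -d ent1 | grep -i vlan     <--on AIX: shows "Port VLAN ID" and "VLAN Tag IDs" of the virtual adapter
entstat -all ent1 | grep -i vlan   <--the same check from the VIOS padmin shell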

Virtual Ethernet adapters:
A virtual Ethernet adapter allows client partitions to send and receive network traffic without a physical adapter. A virtual Ethernet adapter is created when you connect a partition to a virtual network. TCP/IP communications over these virtual networks are routed through the hypervisor, by copying the network traffic (packets) directly from the memory of the sending logical partition to the receive buffers of the receiving logical partition.

Virtual Ethernet adapters allow logical partitions within the same system to communicate without having to use physical Ethernet adapters. 

Virtual Ethernet adapters are connected to an IEEE 802.1Q virtual Ethernet switch and this switch allows logical partitions within the same system to communicate with each other. You can use virtual Ethernet adapters without using the Virtual I/O Server.


Virtual network bridges:
In the enhanced GUI on the HMC, a new term is used for the SEA: it is called a "virtual network bridge".

The virtual network bridge (SEA) connects the internal network traffic to the outside world through a physical network adapter. A virtual network bridge has one or more load groups (trunk adapters); the number of load groups determines the number of virtual Ethernet adapters present on the SEA.

A virtual network bridge can be associated with one untagged virtual network and up to 20 tagged virtual networks. When a virtual network is added to an existing bridge, a tagged virtual network is created. When a virtual network is added to a new bridge, it can be added as an untagged network or as a tagged network.
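Behind the GUI, the bridge is still created as an SEA on the VIOS. A minimal sketch, assuming ent0 is the physical adapter and ent1 the trunk virtual adapter with PVID 1 (the adapter numbers are only examples):

mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1    <--creates the SEA that bridges the virtual network of ent1 to the physical network
lsmap -net -all    <--shows the SEA and its virtual adapter mapping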

47 comments:

Anonymous said...

Your comment about SPs and VGs makes it sound as though they're synonymous, and if you're under IVM, you just use the SP term. That may have been true when first released, but I believe SPs are being enhanced well beyond VGs now, with capabilities such as shared storage pools and file-backed optical.

Or can I do this with a VG and I'm just not aware of it?

aix said...

It was confusing for me too, and after reading some docs, I realized they are very similar. I wrote more on these subjects on this blog: VIO -> VSCSI - Stor. Pool., and VIO -> VSCSI (I would suggest reading this last one (http://aix4admins.blogspot.com/2011/06/virtual-scsi-virtual-scsi-is-based-on.html) from the middle, where I create the same vg with the mkvg and mksp commands.)

I think you made a good point about shared storage pools, and that is possibly a difference. However a Virtual Media Repository (with file-backed-Optical) can be created on a normal vg as well:

padmin@bb-vios2:/home/padmin # lsvg
rootvg

padmin@bb-vios2:/home/padmin # lssp
Pool Size(mb) Free(mb) Alloc Size(mb) BDs Type
rootvg 279552 195584 256 0 LVPOOL

padmin@bb-vios2:/home/padmin # mkvg -vg bbvg hdiskpower0
bbvg

padmin@bb-vios2:/home/padmin # lssp <--the created bbvg is shown as a storage pool as well
Pool Size(mb) Free(mb) Alloc Size(mb) BDs Type
rootvg 279552 195584 256 0 LVPOOL
bbvg 25888 25888 16 0 LVPOOL

padmin@bb-vios2:/home/padmin # mkrep -sp bbvg -size 4G
Virtual Media Repository Created
Repository created within "VMLibrary" logical volume

padmin@atcgvirp2-vios2:/home/padmin # lsrep <--the pool is created on a normal vg
Size(mb) Free(mb) Parent Pool Parent Size Parent Free
4079 4079 bbvg 25888 21792


To be honest, I do not really see why IBM created these this way...if you have any other good idea, you can enlighten me :)

Anonymous said...

Hi,

If uncapped mode is enabled, can the processing units only go up to the desired value, or beyond the desired value, from the shared processor pool? Please clarify.

Regards,
Siva

Siva said...

Hi,

Can we change the desired value (memory) online, without any downtime?

Regards,
Siva

aix said...

Hi,

It can go beyond the desired value in uncapped mode. (That is the main difference between capped and uncapped mode.)

aix said...

Hi, yes, it can be done online as a DLPAR operation.
Some notes:
- it can't be larger than the maximum memory
- it should be changed in the profile as well, so the next reboot will take this new amount of memory (see the example below)
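A hedged example from the HMC CLI (managed-system name P750_1, LPAR name lpar1 and profile name default are placeholders):

chhwres -r mem -m P750_1 -o a -p lpar1 -q 1024    <--DLPAR: adds 1024 MB of memory to the running LPAR
lshwres -r mem -m P750_1 --level lpar --filter "lpar_names=lpar1" -F curr_mem,curr_max_mem    <--verify the current and maximum memory
chsyscfg -r prof -m P750_1 -i "name=default,lpar_name=lpar1,desired_mem=5120"    <--update the profile so the change survives the next profile activation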

Siva said...

Hi,

Is there a maximum limit on acquiring processors from the free shared processor pool?

aix said...

Yes, that is true.
"The number of Virtual Processors can not be extended over 10X of the Entitled Capacaty.
(If EC=0,5 then the maximum number of Virtual Processor can not be over 5.) "
In this case it can use up to maximum 5 Physical CPU in uncapped mode.

Siva said...

Hi,

Sorry, I asked the question wrongly. Can we increase the maximum value online?

aix said...

Hi, the maximum value can't be increased online.
The LPAR profile must be changed and a reboot is needed (the new maximum value must be loaded from the profile).

Unknown said...

Hi , I have a scenario here.

My AIX frame has a total of 8 CPUs enabled.
Partition A is assigned EC=4 and partition B EC=4, and both are uncapped.
In case partition A needs one more CPU which is free in partition B, is it possible for it to be acquired by partition A?

And if it is possible for partition A to acquire one more CPU, will partition B be able to take it back again when it requires it, since it is entitled to partition B?

aix said...

Hi,

The short answer is yes: when you have uncapped partitions, they can exceed their entitled capacity when resources are available, but each LPAR can take back its own processing capacity from other LPARs when it needs it.
There are 2 other things you should know as well:
1. Uncapped weight: it is a value you set at LPAR creation; if it is higher, the LPAR will receive more resources in uncapped mode (compared to other LPARs whose uncapped weight value is lower).

2. Virtual processors: if, in your scenario, you set 4 virtual processors for both LPARs (as the desired value), they will not exceed 4 processing units, because 1 virtual processor represents at most 1 physical CPU. So, if you want an LPAR to use more CPU when needed in uncapped mode, you should set the virtual processor count higher than 4 (for example 5, 6, 7 or 8 if you would like one LPAR to be able to use all the resources when needed). See the example below.
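Both settings can be checked quickly: from inside the AIX LPAR with lparstat, or for all LPARs on the HMC with lshwres (the managed-system name P750_1 is a placeholder, and attribute names may vary slightly by HMC level):

lparstat -i | grep -Ei "mode|weight|virtual cpus"    <--shows Mode, Variable Capacity Weight and Online/Maximum Virtual CPUs
lshwres -r proc -m P750_1 --level lpar -F lpar_name,curr_proc_units,curr_procs,curr_sharing_mode,curr_uncap_weight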

Anonymous said...

Hi... since I am starting my career as an AIX system admin, could you please tell me the basic concepts of VIO and the most-used VIO commands, for the troubleshooting part? Thanks in advance.

aix said...

Hi, on this link you can find more info about this topic: http://aix4admins.blogspot.hu/2011/06/vios-service-package-definitions-fix.html

Anonymous said...

Hi ..
I have system with
Entitled Capacity : 0.50
Online Virtual CPUs : 2
Maximum Virtual CPUs : 16
Minimum Virtual CPUs : 1

could you please explain what Entitled Capacity means?

aix said...

Hi, Processing Capacity, Processing Units and Entitled Capacity are basically the same thing. There is some description above, but Entitled Capacity: 0.50 means 50% of one CPU is guaranteed to that LPAR (which will be distributed over 2 virtual CPUs in your case). See below for a quick way to check the actual usage.
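To see how much of that entitlement is actually consumed at runtime, the interval mode of lparstat is handy (the numbers are only illustrative):

lparstat 2 5    <--the "physc" column shows the consumed physical processors and "%entc" the percentage of the 0.50 entitlement being used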

Anonymous said...

Hi, can I change uncapped mode to capped and vice versa with DLPAR (i.e. without rebooting the partition)?

hoping you will answer,
Thank you ..any way

aix said...

Hi, changing capped/uncapped can be done only in the LPAR profile, and after that a profile activation is needed. So it cannot be changed online with DLPAR. (See the sketch below.)
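A hedged sketch of the profile change on the HMC CLI (managed-system, LPAR and profile names are placeholders; the profile has to be re-activated afterwards):

chsyscfg -r prof -m P750_1 -i "name=default,lpar_name=lpar1,sharing_mode=uncap,uncap_weight=128"    <--switches the profile to uncapped with a weight of 128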

Anonymous said...

Hi Admin,
It's a very good blog with great content. Admin, could you write an update about WPAR?
Thanks
AR

aix said...

I'll try my best...

Anonymous said...

Thanks admin for your reply, AR

Adeel said...

Where can I find a simple VIOS PDF to understand VIOS? I also want VIOS commands.

aix said...

IBM Redbooks (or info can be found on this page)

Anonymous said...

Can we move virtual I/O slots without downtime?

Anonymous said...

Regarding the above question: can virtual adapters be moved by using DLPAR?

aix said...

Virtual Adapters can be added and removed online.

Anonymous said...

Hi Admin,

Referring to the screen below, can I know why "EntitledCPU= 1.50" when my LCPU count is 4, and CPUs 0 and 1 are in use but the other two show 0.0?
1) Please advise why those logical CPUs are not in use by users and are only used by the system.
2) Why is my Ent 1.5 when I have 4 LCPUs? Please explain, as I have a performance issue and need more explanation.

Thank you in Advance.

┌─topas_nmon──W=WLM──────────────Host=xxxxxxxxxxx ───Refresh=2 secs───17:46.38─
│ CPU-Utilisation-Small-View ───────────EntitledCPU= 1.50 UsedCPU= 0.007─────
│Logical CPUs 0----------25-----------50----------75----------100
│CPU User% Sys% Wait% Idle%| | | | |
│ 0 17.2 53.1 0.0 29.7|UUUUUUUUssssssssssssssssssssssssss > |
│ 1 26.7 46.7 0.0 26.6|UUUUUUUUUUUUUsssssssssssssssssssssss > |
│ 2 0.0 26.0 0.0 74.0|sssssssssssss > |
│ 3 0.0 37.9 0.0 62.1|ssssssssssssssssss > |

aix said...

Hi,
1. I guess because there are no more user processes :) If more is needed, I think work will be dispatched there as well.

2. You should understand the terms Entitled Capacity, Virtual Processor and Logical Processor. For a starting point please check this link: http://aix4admins.blogspot.hu/2013/04/virtual-processors-and-entitled.html

friend lan said...

hi admin,
if I do not use VIOS to allocate disk resources to the LPARs, and my server has 2 SAS RAID controllers, how many LPARs can I create on it?

aix said...

Hi, an LPAR with local disks needs its own storage controller, so I would say 2... but check the technical documentation of your model as well.

santosh said...

hi Balazs,
I am a very big fan of your blog; it is leading me down a wonderful path.
I am a beginner in AIX.
I have a question:
Is it possible to rename an LPAR in AIX?

aix said...

Hi, yes, it is possible. On the HMC, if you choose LPAR properties, you can edit the LPAR name field, or you can do it on the HMC command line as well. (You asked about the LPAR name change, not the hostname change; there is a difference between these.)
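For example, from the HMC command line (the managed-system name P750_1 is a placeholder):

chsyscfg -r lpar -m P750_1 -i "name=old_lpar_name,new_name=new_lpar_name"    <--renames the LPAR on the HMC; the AIX hostname inside the LPAR is unchanged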

Anonymous said...

Thanks a ton for this blog, you are a life saver :)

I have a query here, how can the max vCPUs be 16 when EC is 0.5? It should not be more than 10*EC = 5 in this case.

Please correct me if am wrong.

Thanks,
Varun

aix said...

Yes, you are right, it cannot be more than 10*EC (or in the case of POWER7+, 20*EC) as the desired value; however, this is just the setting for the maximum value. I guess for the maximum value you can type anything, but it is useless to write more than what is allowed by the system (as in this case).

Anonymous said...

Hi Balazs. Quick question.

Can I virtualize a Power 5 server without spending a single dollar in licensing fees ??

I understand IVM is part of VIOS and is activated when VIOS is installed.
Is the VIOS software free? If so, where do I download it from?

thanks man

Anonymous said...

I forgot to add.. I only need two LPARs, so IVM is ok for me..

aix said...

Hi, theoretically you can do that; as I have read, IVM comes with VIOS (however I have never tested it... we have HMCs). As far as I know, if you have a maintenance contract with IBM, you are able to download base-level images (with a special registration); otherwise only updates are free at IBM Fix Central.

ra said...

Hello Balazs,

do you have any idea about the different types of VIO licenses and what the differences are between them?

thanks in advance...

Regards
Rahul

Manoj Suyal said...

Hi,

All you need is the VIOS media: install VIOS on the system and assign an IP to that server.
Access the IP using HTTP and proceed with LPAR creation.

Tested on blades (JS12); it should work for your system as well.

Anonymous said...

I think you can change capped/uncapped and the weight factor using DLPAR. I am in the HMC right now, using PowerVM Standard on a P7 machine, and it is letting me change it.

Anonymous said...

I think the reasoning behind storage pools is that you can use/create/modify them in the HMC GUI and thus assign storage of any kind to LPARs.
If I recall correctly, this part of the HMC GUI is at the managed-system level.

Anonymous said...

How can I find out how many CPUs a managed machine supports?

oyao aixblogspot.com said...

How many CPUs does a POWER5 managed machine support?

Unknown said...

If I am wrong, please correct me: DLPAR to increase CPU is only for capped partitions, not for uncapped partitions, since that happens automatically. Thank you, Sir.

Anonymous said...

I could not understand Active Memory Expansion. According to the above, we can increase the effective memory capacity; but for that we should have physical memory available, right?

Raj said...

Hi,

How can I see the physical/virtual resources of an AIX LPAR from the VIO console? Thanks!

Anonymous said...

What is VIO and what is its functionality? Please explain.