

Oracle - Tuning

Resource Limits

ulimits: use smit chuser, or edit /etc/security/limits to create a stanza for the Oracle/grid user, and set -1 (unlimited) for everything except core.

 data = -1
 stack = -1
 fsize_hard = -1
 cpu_hard = -1
 data_hard = -1
 stack_hard = -1
 fsize = -1
 nofiles = -1
 cpu = -1
 rss = -1
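
The same limits can also be set from the command line with chuser instead of editing /etc/security/limits directly. A minimal sketch, assuming the database owner is a user named "oracle" (adjust the user name for your environment and grid user):

```shell
# Set all limits to unlimited (-1) except core, for the user "oracle"
chuser data=-1 stack=-1 fsize=-1 cpu=-1 nofiles=-1 rss=-1 \
       data_hard=-1 stack_hard=-1 fsize_hard=-1 cpu_hard=-1 oracle

# Verify the resulting stanza
lsuser -a data stack fsize cpu nofiles rss oracle
```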

Soft file descriptor limit: at least 1024
Hard file descriptor limit: at least 65536

maxuproc: maximum number of PROCESSES allowed per user (smit chgsys). Set this value to 16384 (16k)
ncargs: 128


IO (FC adapter, disk)

FC adapter:
max_xfer_size should be increased from the default 1MB to 2MB. The default adapter DMA memory size is 16 MB, which increases to 128 MB when a non-default max_xfer_size is used. A larger DMA size can be important for performance with many concurrent large-block I/Os.

num_cmd_elems might need to be increased if fcstat -e reports a persistent nonzero value for No Command Resource Count. If fcstat -e reports a persistent, nonzero value for No DMA Resource Count, contact support.

queue wait and queue overflow detected through iostat -Dl might indicate a need to increase queue depth. max_transfer might need to be adjusted upward depending on the largest I/O requested by Oracle (a typical starting point for Oracle on AIX is 0x100000, i.e. 1 MB).
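
The adapter and hdisk attributes above can be inspected and changed along these lines. A sketch with placeholder device names (fcs0, hdisk4) and example values; -P defers the change until the device is reconfigured or the system rebooted:

```shell
# Check FC adapter extended statistics for resource shortages
fcstat -e fcs0 | grep -i "resource count"

# Current adapter settings
lsattr -El fcs0 -a max_xfer_size -a num_cmd_elems

# Raise transfer size to 2 MB and increase command elements (example values)
chdev -l fcs0 -a max_xfer_size=0x200000 -a num_cmd_elems=1024 -P

# Watch hdisk queue behaviour, then adjust queue_depth / max_transfer as needed
iostat -Dl 5 1
chdev -l hdisk4 -a queue_depth=32 -a max_transfer=0x100000 -P
```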

As of AIX 5.3, the optimal LTG size is calculated dynamically during the varyonvg process and does not need to be set manually. The varyonvg '-M' parameter should not be used, as it overrides the dynamically calculated LTG size. It is recommended that all hdisks within a given VG have the same 'max_transfer' (and other attribute) values. To change hdisk attribute values, unmount any associated filesystems and vary off the VG.

ASM considerations for standalone Oracle 11gR2:
ASM will use asynchronous I/O by default, so filesystemio_options=ASYNC (the default) is appropriate. For clustered ASM (e.g. RAC) configurations, SCSI reservation must be disabled on all ASM hdisk and hdiskpower devices (e.g. reserve_policy=no_reserve). For standalone ASM, hdisk and hdiskpower devices do not need SCSI reservation disabled.



VG should be created as scalable VG. If ASM is not used, max interpolicy striping (pp spreading) is suggested when logical volumes are created. To get the most benefit from spreading physical partitions across the LUNs, use a small physical partition size, for example, 32 MB or 64 MB.
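
The VG and LV advice above can be sketched as follows, with placeholder names and sizes; `-S` creates a scalable VG, `-s 32` sets a 32 MB PP size, and `-e x` requests maximum inter-physical-volume allocation (PP spreading):

```shell
# Scalable VG with 32 MB physical partitions across four LUNs
mkvg -S -s 32 -y oradatavg hdisk4 hdisk5 hdisk6 hdisk7

# LV spread across all PVs in the VG: -e x = maximum inter-policy allocation
mklv -y oradatalv -t jfs2 -e x oradatavg 100
```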

Buffered file I/O on JFS2
The default is filesystemio_options=ASYNC, which means all data spaces, redo log file systems, and control file systems use the kernel buffers rather than writing directly to disk. In this case, it does not matter whether redo log file systems and control file systems are 512-byte or 4 KB block size file systems. Best Oracle performance on AIX is, however, usually achieved using CIO (though there are exceptions).

Concurrent I/O (CIO) on JFS2
Set the Oracle parameter filesystemio_options=SETALL, or mount the filesystems with the CIO option. It is not necessary to both set SETALL and mount the filesystems with the CIO option, although no harm is done either way. Metalink note 272520.1 indicates that mounting with CIO is needed, while IBM believes it is not needed. IBM is working with Oracle to correct the Metalink note.

If using CIO with SETALL, a CIO mount, or both, you must create separate file systems for redo logs and control files (or a single filesystem for both), with an agblksize of 512 rather than the default 4 KB. The ioo parameters aio_fsfastpath and posix_aio_fsfastpath accelerate CIO; both are enabled by default in AIX 6.1 and 7.1.

With AIX 6.1, IBM introduced a new open flag, O_CIOR, which is the same as O_CIO but allows subsequent open calls without CIO. The advantage of this enhancement is that other tools such as cp, dd, cpio and dbv can access database files in read-only mode without having to open them with CIO. When AIX 6.1 is detected, Oracle uses the O_CIOR option to open files on JFS2, so you should no longer mount the filesystems with the mount option "-o cio".
The mount option noatime is suggested for Oracle 10g binaries.

IBM mount advice for database files:
- Data files: use CIO (filesystemio_options=SETALL) and the default agblksize (4 KB); mount with no options.
- Redo logs: create with an agblksize of 512 and mount with no options. With SETALL, Oracle does direct I/O for redo logs.
- Control files: create with an agblksize of 512 and mount with no options. With SETALL, Oracle does direct I/O for control files.
- Archive logs: mount -o rbrw. Do not use CIO; use the jfs2 rbrw option.
- Dumps: mount -o rbrw
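
The mount advice above can be sketched with crfs/mount, using placeholder VG names, mount points and sizes, and the INLINE jfs2 log suggested in the general rules below:

```shell
# Redo log filesystem: 512-byte agblksize, inline log, plain mount
crfs -v jfs2 -g oralogvg -m /oracle/redo -a size=8G -a agblksize=512 -a logname=INLINE
mount /oracle/redo

# Archive log filesystem: default agblksize, mounted with release-behind read/write
crfs -v jfs2 -g oraarchvg -m /oracle/arch -a size=50G -a logname=INLINE
mount -o rbrw /oracle/arch
```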

General rules:
- All vgs scalable
- LUNs no larger than 500G
- Preferred number of LUNs in a vg: 10 or more (with exceptions, see later); minimum 4 for extra small DBs/vgs (like 100GB)
- PP size preferably no larger than 32MB (16MB for LUNs smaller than 250GB, 32MB for 250GB to 500GB)
- All LVs with "maximum allocation"
- All jfs2 filesystems with INLINE log
- Filesystems for online redo logs formatted with 512 fragment size

Rules for high volume DBs
- Extra vgs for online log and mirror log (INSToriglogvg, INSTmirrlogvg): small LUNs, with at least 4 LUNs per vg
- Extra vgs for the highest-volume tablespaces; the same rules as for general vgs apply, with 1 filesystem per vg


File System Options

The DIO and CIO features included in AIX improve file system performance to a level comparable to raw logical volumes. Before Oracle Database 11g, DIO and CIO could not be enabled at the file level on JFS/JFS2. Therefore, the Oracle home directory and data files had to be placed in separate file systems for optimal performance. The Oracle home directory was placed on a file system mounted with default options, with the data files and logs on file systems mounted using the dio or cio options.

With Oracle Database 11g, you can enable DIO and CIO on JFS/JFS2 at the file level. You can do this by setting the FILESYSTEMIO_OPTIONS parameter in the server parameter file to setall or directIO. This enables CIO on JFS2 and DIO on JFS for all data file Input-Output. Because the directIO setting disables asynchronous
Input-Output it should normally not be used. As a result of this 11g feature, you can place data files on the same JFS/JFS2 file system as the Oracle home directory and still use DIO or CIO for improved performance.

However, you should still place Oracle Database logs on a separate JFS2 file system for optimal performance. The optimal configuration is to create the file system using the agblksize=512 option and to mount it with the cio option. Redo is a natural bottleneck for high-update databases, because the redo disks must accept the sum of all database update rates. Once the redo logs and their disks are optimized, the only way to relieve a redo bottleneck is faster redo storage.

For improved performance, create separate file systems for redo logs and control files (or a single file system for both), with an agblksize of 512 bytes rather than the default of 4 KB.

Note: To use the Oracle RAC option, you must place data files on an ASM disk group or on a GPFS file system. You cannot use JFS or JFS2. DIO is implicitly enabled when you use GPFS.


Asynchronous I/O

Asynchronous I/O (AIO) allows a program to initiate an I/O operation and then continue with other work in parallel with the I/O operation. Oracle Database 12c often requires multiple server and user processes to run at the same time, so it takes full advantage of the AIO services provided by AIX. AIO is implemented with AIO server processes; the configuration values minservers, maxservers and maxreqs control the AIO server configuration of AIX.

AIO kernel extensions are loaded at system boot (always loaded), AIO servers stay active as long as there are service requests, and the number of AIO servers is dynamically increased or reduced based on the demand of the workload. The aio_server_inactivity parameter defines how many seconds an AIO server stays idle before it exits. AIO tunables are now based on the logical CPU count, so it is usually not necessary to tune minservers, maxservers, and maxreqs as in the past.

For Oracle Database 12c, the database defaults to asynchronous I/O (AIO) enabled and concurrent I/O (CIO) disabled. In general, a good starting point is to set the filesystemio_options=setall, in your init*.ora configuration file. This setting will enable AIO (which is the default) and CIO operation. CIO operation is built
upon direct I/O (DIO) with the additional function of inode locking. Note that there may be workloads (e.g. sequential reads) where cached I/O performs better than CIO.
When using CIO/DIO, the Oracle setting of DB_FILE_MULTIBLOCK_READ_COUNT (the maximum number of blocks read in one I/O operation during a sequential scan) needs to be considered. Also, the alignment of the database blocksize and the file system block size (agblksize) has to be considered.

From Oracle Database 11g Release 2 onwards, Oracle opens files using "O_CIOR", which is similar to "O_CIO" but allows subsequent open calls without CIO, so you no longer need to mount the JFS2 filesystems with the mount option "-o cio", and other OS tools and third-party tools can access the database files without any issues.

To display the number of asynchronous Input-Output servers running, enter the following commands as the root user:
# pstat -a | grep -c aios
# ps -k | grep aioserver

Check the number of active asynchronous Input-Output servers periodically, and change the values of the minservers and maxservers parameters if required. The
changes take place when the system is restarted.


IOCP (I/O Completion Ports)

On AIX on POWER systems, enable I/O completion ports (IOCP) to ensure successful database and grid infrastructure installation.
To check if the IOCP module is enabled, run the following command and look for the status "Available" in the output:

$ lsdev | grep iocp
iocp0 Available  I/O Completion Ports

If IOCP is in "Defined" state, enable it (using "smitty").

Activate iocp:
# lsdev -Cc iocp; lsattr -El iocp0
# mkdev -l iocp0; chdev -l iocp0 -P -a autoconfig='available'


Oracle Block Size

During read operations, entire operating system blocks are read from the disk. If the database block size is smaller than the operating system file system block size, then Input-Output bandwidth is inefficient. If you set the Oracle Database block size to be a multiple of the file system block size, then you can increase performance by up to 5 percent. The DB_BLOCK_SIZE initialization parameter sets the database block size. However, to change the value of this parameter, you must re-create the database. To see the current value of the DB_BLOCK_SIZE parameter, run the SHOW PARAMETER DB_BLOCK_SIZE command in SQL*Plus.

You can configure Oracle Database block size for better Input-Output throughput. On AIX, you can set the value of the DB_BLOCK_SIZE initialization parameter to between 2KB and 32 KB, with a default of 4 KB. For databases on raw partitions, Oracle Database block size is a multiple of the operating system physical block size (512 bytes on AIX). Oracle recommends smaller Oracle Database block sizes (2 KB or 4 KB) for online transaction processing or mixed workload environments and larger block sizes (8 KB, 16 KB, or 32 KB) for decision support system workload environments.


Log Archive Buffers

By increasing the LOG_BUFFER size, you may be able to improve the speed of archiving the database, particularly if transactions are long or numerous. Monitor the log file Input-Output activity and system throughput to determine the optimum LOG_BUFFER size. Tune the LOG_BUFFER parameter carefully to ensure that the overall performance of normal database activity does not degrade.


Server Side Caching

Server-side caching is a new feature introduced in AIX 7.1 TL04 SP02 and AIX 7.2. It is supported for use with Oracle Database to improve the performance of read-I/O-intensive workloads on AIX. Server-side caching provides the capability to cache application data stored on SAN to Solid State Devices (SSDs), Flash Storage LUNs, or virtual disks provided by VIOS on the AIX server. After server-side caching is enabled in AIX, all read I/O requests are first redirected to the caching area created on the fast SSDs, Flash Storage, or VIOS virtual disks on the server. The feature can be enabled or disabled dynamically; no reboot is required, and changes are transparent to the running application or workload. It works only in Oracle Database non-RAC environments.


Write Behind

The write behind feature enables the operating system to group write Input-Output together, up to the size of a partition. You can improve performance by doing this,
because the number of Input-Output operations is reduced. The file system divides each file into 16 KB partitions to increase write performance, limit the number of dirty pages in memory, and minimize disk fragmentation. The pages of a particular partition are not written to disk until the program writes the first byte of the next 16KB partition. To set the size of the buffer for write behind to eight 16 KB partitions, enter the following command:
# /usr/sbin/vmo -o numclust=8


Sequential Read Ahead

Note: The information in this section applies only to file systems, and only when neither DIO nor CIO are used.

The VMM anticipates the need for pages of a sequential file. It observes the pattern in which a process accesses a file. When the process accesses two consecutive pages of the file, the VMM assumes that the program will continue to access the file sequentially and schedules additional sequential reads of the file. These reads overlap the program processing and make data available to the program sooner. The following VMM thresholds, implemented as kernel parameters, determine the number of pages read ahead:
- minpgahead: the number of pages read ahead when the VMM first detects the sequential access pattern.
- maxpgahead: the maximum number of pages that the VMM reads ahead in a sequential file.

Set the minpgahead and maxpgahead parameters to appropriate values for an application. The default values are 2 and 8 respectively. Use the vmo command to change these values. You can use higher values for the maxpgahead parameter in systems where the sequential performance of striped logical volumes is of paramount importance.
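
These thresholds are changed with vmo. Note that minpgahead/maxpgahead apply to legacy JFS; the JFS2 equivalents are j2_minPageReadAhead and j2_maxPageReadAhead. The values below are illustrative starting points, not recommendations:

```shell
# Show current read-ahead thresholds (legacy JFS and JFS2)
vmo -o minpgahead -o maxpgahead
vmo -o j2_minPageReadAhead -o j2_maxPageReadAhead

# Example: raise maximum read-ahead for sequential-heavy workloads
# (-p makes the change persistent across reboots)
vmo -p -o maxpgahead=16
vmo -p -o j2_maxPageReadAhead=128
```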


Disk IO Pacing

Disk I/O pacing is an AIX mechanism that lets you limit the number of pending I/O requests to a file. This prevents disk-I/O-intensive processes from saturating the CPU, so the response time of interactive and CPU-intensive processes does not deteriorate. You achieve disk I/O pacing by adjusting two system parameters: the high-water mark and the low-water mark. When a process writes to a file whose number of pending I/O requests has reached the high-water mark, the process is put to sleep. It wakes up when the number of outstanding I/O requests falls to or below the low-water mark.

You can use the smit command to change the high and low-water marks. Determine the water marks through trial-and-error. Use caution when setting the water marks, because they affect performance. Tuning the high and low-water marks has less effect on disk Input-Output larger than 4 KB.
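
Besides smit, the water marks can be handled directly as the sys0 attributes maxpout (high) and minpout (low). A sketch; the values shown are the AIX 6.1 shipped defaults, given here only as an example:

```shell
# Display current I/O pacing water marks
lsattr -El sys0 -a maxpout -a minpout

# Example: set high-water mark 8193 and low-water mark 4096
chdev -l sys0 -a maxpout=8193 -a minpout=4096
```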

You can determine disk IO saturation by analyzing the result of iostat, in particular, the percentage of iowait and tm_act. A high iowait percentage combined
with high tm_act percentages on specific disks is an indication of disk saturation. (A high iowait alone is not necessarily an indication of an Input-Output bottleneck.)


IOO tunables j2_nBufferPerPagerDevice and j2_dynamicBufferPreallocation

Do not change these unless there is a high delta in the vmstat -v counter "external pager filesystem I/Os blocked with no fsbuf". If this value is high, first increase j2_dynamicBufferPreallocation from 16 (16k slabs) to 32, and monitor. If increasing this does not help, then consider raising the value of j2_nBufferPerPagerDevice, which is the starting value for dynamic buffer allocation.
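
To check the counter and apply the first-step change described above:

```shell
# Watch the blocked-I/O counter; a steadily growing delta indicates fsbuf shortage
vmstat -v | grep "filesystem I/Os blocked with no fsbuf"

# First step: double j2_dynamicBufferPreallocation (in 16k slabs), then monitor
ioo -p -o j2_dynamicBufferPreallocation=32
```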

Do not change AIX restricted tunables without advice from IBM AIX support. In AIX 6.1, j2_nBufferPerPagerDevice is a restricted tunable, while j2_dynamicBufferPreallocation is not.

Here are some commonly suggested (non-default) values for three ioo parameters:
- j2_dynamicBufferPreallocation=128
- numfsbufs=1024 (legacy jfs)
- maxpgahead=16 (legacy jfs)


Resilvering (mirroring) with Oracle Database

If you disable mirror write consistency for an Oracle data file allocated on a raw logical volume, then the Oracle Database crash recovery process uses resilvering to
recover after a system failure. This resilvering process prevents database inconsistencies or corruption.

During crash recovery, if a data file is allocated on a logical volume with multiple copies, then the resilvering process performs a checksum on the data blocks of all the copies. It then performs one of the following:
- If the data blocks in a copy have valid checksums, then that copy is used to update the copies that have invalid checksums.
- If all copies have blocks with invalid checksums, then the blocks are rebuilt using the redo log file.

On AIX, the resilvering process works only for data files allocated on raw logical volumes for which mirror write consistency is disabled. Resilvering is not required for data files on mirrored logical volumes with mirror write consistency enabled, because mirror write consistency ensures that all copies are synchronized. If the system fails while mirror write consistency was disabled, then run the syncvg command to synchronize the mirrored logical volume before starting Oracle Database.

Note: If a disk drive fails, then resilvering does not occur. You must run the syncvg command before you can reactivate the logical volume. Oracle supports resilvering for data files only. Do not disable mirror write consistency for redo log files.


Paging space

Oracle documentation suggests the following values as a starting point for an Oracle Database:

RAM                        Swap space
Between 1 GB and 2 GB      1.5 times the size of RAM
Between 2 GB and 16 GB     Equal to the size of RAM
More than 16 GB            16 GB
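
Paging space can be checked, and grown in logical-partition increments, as follows (hd6 and the increment are placeholders; the size added depends on the VG's PP size):

```shell
# List paging spaces, their sizes and current utilization
lsps -a

# Grow paging space hd6 by 8 logical partitions
chps -s 8 hd6
```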



In general, AIX support suggests AIX 7.1 defaults for Oracle.


Oracle Large Page Usage

The general recommendation for most Oracle databases on AIX is to utilize 64KB page size and not 16MB page size for the SGA.

AIX 6.1 and 7.1 support three or four page sizes, depending on the hardware: 4 KB (default), 64 KB (medium), 16 MB (large), and 16 GB (huge).
Page sizes 64 KB and 16 MB have been shown to benefit Oracle performance by reducing kernel lookaside processing to resolve virtual to physical addresses. Oracle 11g uses 64 KB pages for dataspaces by default. Oracle Automatic Memory Management (AMM) uses the 64KB page size by default for the SGA and database (with the exception of the TNS Listener). This is the suggested value, since it has been found that 64 KB pages yield nearly the same performance benefit as 16 MB pages and require no special management.

A 64 KB page size for data, text, and stack regions is useful in environments with a large (for example, 64 GB+) SGA and many online transaction processing (OLTP) users. For smaller Oracle instances, 4 KB is sufficient for data, text, and stack. 64 KB page use for data, text, and stack is implemented separately from 64 KB pages for the SGA, and is done by means of an environment variable exported on behalf of the Oracle user. AME by default uses a 4k page size.
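
The environment variable referred to above is LDR_CNTRL. A sketch of exporting it in the oracle user's profile before the instance is started (the page sizes actually granted can be verified afterwards with svmon):

```shell
# In the oracle user's profile: request 64 KB pages for data, text and stack
export LDR_CNTRL=DATAPSIZE=64K@TEXTPSIZE=64K@STACKPSIZE=64K

# After restarting the instance, inspect the page sizes in use by a process
# (replace <pid> with an Oracle server process ID)
svmon -P <pid>
```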


SGA tuning

LOCK_SGA = FALSE is the default; this means that the SGA is not pinned in memory. AIX performance support generally suggests not pinning the SGA. This is the suggested value, since it has been found that 64 KB pages provide nearly the same performance benefit as 16 MB pages, require no special management, and minimize the potential negative impact of incorrectly configuring the SGA size.

Some additional info:
Oracle versions prior to will allocate only two types of pages for the SGA, 4KB and 16MB. The SGA initialization process during the startup of the instance will try to allocate 16MB pages for the shared memory if LOCK_SGA is set to TRUE. If the LOCK_SGA is set to FALSE, the 4KB page will be used and no pinning will occur.

The primary motivation for considering the pinning of SGA memory is to prevent Oracle SGA from ever being paged out. In a properly tuned Oracle on AIX environment there should not be any paging activity to begin with, so SGA related pages should stay resident in physical memory even without explicitly pinning them. In improperly configured or tuned environments where the demand for computational pages exceeds the physical memory available to them, SGA pinning will not address the underlying issue and will merely cause other computational pages (e.g. Oracle server or user processes) to be paged out. This can potentially have as much or more impact on overall Oracle performance as the paging of infrequently used SGA pages.

When we say that memory is "pinned", it means that the page table prohibits page stealing and swapping. In other words, the page-stealing daemon cannot steal pages from this page table and replace them with other pages.

If not done properly, Oracle SGA pinning and/or the use of large pages can potentially result in significant performance issues and/or system crashes. And, for many Oracle workloads, SGA pinning is unlikely to provide significant additional benefits. It should therefore only be considered where there is a known performance issue that could not be addressed through other options, such as VMM parameter tuning.

You can determine the SGA size by running the ipcs command as the oracle user.

Use the svmon command to monitor the use of pinned memory during the operation of the system. Oracle Database attempts to pin memory only if the LOCK_SGA parameter is
set to true. If the SGA size exceeds the size of memory available for pinning, then the portion of the SGA exceeding these sizes is allocated to ordinary shared memory.

svmon reports an "available" metric, which can be used to determine more easily how much memory remains available to applications. The available metric reports the additional amount of physical memory that can be used for applications without incurring paging. When the amount of available memory gets low, it is an indication that the system is close to paging.

# svmon -G -O unit=auto
Unit: auto
               size       inuse        free         pin     virtual  available   mmode
memory        2.00G     578.04M       1.44G     430.34M     463.48M      1.47G     Ded
pg space    512.00M       4.10M

               work        pers        clnt       other
pin         354.30M          0K       14.3M       61.8M
in use      463.48M          0K     114.56M


Oracle process memory footprint

The AIXTHREAD_SCOPE environment variable controls whether an AIX process runs with process-wide contention scope (the default) or with system-wide contention scope. System-wide contention scope significantly reduces the memory required for each database process. AIX operates most effectively with Oracle Database 12c and Oracle RAC when using system-wide contention scope (AIXTHREAD_SCOPE=S). Both AIX 7.1 and AIX 6.1 set the environment variable default to AIXTHREAD_SCOPE=S (1:1). Oracle recommends system-wide scope (AIXTHREAD_SCOPE=S), so this environment variable no longer needs to be set explicitly.

Some additional info:
The default value of the AIXTHREAD_SCOPE environment variable is P, which specifies process-wide contention scope. When using process-wide contention scope, Oracle threads are mapped to a pool of kernel threads. When Oracle is waiting on an event and its thread is swapped out, it may return on a different kernel thread with a different thread ID. Oracle uses the thread ID to post waiting processes, so it is important for the thread ID to remain the same. When using systemwide contention scope, Oracle threads are mapped to kernel threads statically, one to one. For this reason, Oracle recommends that you use systemwide contention. The use of systemwide contention is especially critical for Oracle Real Application Clusters (Oracle RAC) instances.

If you set systemwide contention scope, then significantly less memory is allocated to each Oracle process.

Bourne, Bash, or Korn shell:
Add to the ~/.profile or /usr/local/bin/oraenv script: AIXTHREAD_SCOPE=S; export AIXTHREAD_SCOPE

C shell:
Add to the ~/.login or /usr/local/bin/coraenv script: setenv AIXTHREAD_SCOPE S



In the initial AME implementation, 64k pages were not supported when AME was enabled, which can have a significant impact on Oracle database performance, so the initial AME implementation was not certified for use with the Oracle database. When AME is enabled, AIX uses the 4k page size instead of 64k for the Oracle database. Starting with AIX 7.2 TL1, AIX supports 64K pages under AME using a hardware compression engine; this is what is currently being certified for use with the Oracle database.


Virtual processor folding

This is a feature of Power Systems in which unused virtual processors are taken offline until the demand requires that they be activated. The default is to allow virtual processor folding, and this should not be altered without consulting AIX support. (schedo parameter vpm_fold_policy=2).

For Oracle database environments it is strongly suggested to set schedo parameter vpm_xvcpus to a value of 2 (schedo -p -o vpm_xvcpus=2) as we have seen AIX incorrectly folding too many processors if the parameter is left at default of 0. It is dynamic, not requiring reboot. This is a critical setting in a RAC environment when using LPARs with processor folding enabled. If this setting is not adjusted, there is a high risk of RAC node evictions under light database workload conditions.

This setting says that a minimum of 2 additional VPs will be online (i.e. not folded/disabled) at all times. On shared-processor systems using RAC, the minimum recommended value for vpm_xvcpus is 2, meaning there will be a minimum of 3 unfolded CPUs (the default 1 plus the 2 additional ones). This is recommended to avoid RAC reboot issues. A resource issue can arise when one Oracle process enters a tight loop polling on a file descriptor while the Oracle process that is supposed to send to that descriptor does not get scheduled. Once that send event occurs, things go back to normal and AIX housekeeping can run as well.



These values are generally suggested for Oracle and can be considered starting points. Please note that all udp settings are specific to RAC; the RAC interconnect uses UDP for interprocess communication:
sb_max >= 1MB (1048576) and must be greater than the maximum tcp or udp send or recvspace
tcp_sendspace = 262144
tcp_recvspace = 262144
udp_sendspace = db_block_size * db_file_multiblock_read_count (for example, 8192 * 16 = 131072)
udp_recvspace = 10 * (udp_sendspace)
rfc1323 = 1 (see Recent suggestions and open issues)
tcp_fastlo = 1. This is new in AIX 7.1 (no -p -o tcp_fastlo=1). The tcp_fastlo default setting is off ('0'); test it first.

Ephemeral ports (non-default values are suggested for a large number of connecting hosts or a high degree of parallel query, and also to avoid install-time warnings)
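
The ranges below are the ones the Oracle installer checks for; they can be set persistently with no:

```shell
# TCP and UDP ephemeral port ranges expected by the Oracle installer
no -p -o tcp_ephemeral_low=9000 -o tcp_ephemeral_high=65500
no -p -o udp_ephemeral_low=9000 -o udp_ephemeral_high=65500
```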

Some additional consideration for RAC network as part of the 10 GigE:
LACP timeout: use the "long timeout" switch setting for the amount of time to wait before sending LACPDUs.
Flow control: enable flow control at the switch port and on the server-side ports (using the HMC) for the 10GE adapter or 10GE HEA configuration.
UDP tuning: tune udp_sendspace and udp_recvspace until there are no "socket buffer overflows" in netstat -s.
Jumbo frames: enable jumbo frames on every hop (server side, switch side).

Adapter-port-specific MTU settings are overridden by setting mtu_bypass=on. This is complemented by tcp_pmtu_discover=1 for MTU path discovery.

Network tunables (with command):
# no -p -o udp_sendspace=262144; no -p -o udp_recvspace=655360; no -p -o tcp_sendspace=262144; no -p -o tcp_recvspace=262144
# no -p -o rfc1323=1; no -p -o sb_max=4194304; no -r -o ipqmaxlen=512 #(needs reboot); no -p -o use_isno=1

for each active network interface (i.e. en0, en1, en2 .. etc.):
# chdev -l enX -a state='up' -a rfc1323='1' -a tcp_mssdflt='1448' -a tcp_nodelay='1' -a tcp_recvspace='262144' -a tcp_sendspace='262144'


Oracle DB parameters

DB_BLOCK_SIZE specifies the default size of Oracle database blocks. This parameter cannot be changed after the database has been created, so it is vital that the correct value is chosen at the beginning. Optimal DB_BLOCK_SIZE values vary depending on the application. (Typical values are 8KB for OLTP workloads and 16KB to 32KB for DSS workloads. If you plan to use a 2KB DB_BLOCK_SIZE with JFS2 file systems, be sure to create the file system with agblksize=2048.)

The primary purpose of the DB buffer cache area(s) is to cache frequently used data (or index) blocks in memory in order to avoid or reduce physical I/Os to disk. In general, you want just enough DB buffer cache allocated to achieve the optimum buffer cache hit rate. Increasing the size of the buffer cache beyond this point may actually degrade performance due to increased overhead of managing the larger cache memory area.

AIX fully supports asynchronous I/O for file systems (JFS, JFS2, GPFS and Veritas) as well as raw devices. The DISK_ASYNCH_IO parameter should always be set to TRUE (the default value).

Setting the FILESYSTEMIO_OPTIONS parameter in the server parameter file to SETALL or DIRECTIO, enables CIO on JFS2 and DIO on JFS for all data file IO. Because the DIRECTIO setting disables asynchronous IO it should normally not be used. As a result of this 12c feature, you can place data files on the same JFS/JFS2 file system as the Oracle home directory and still use DIO or CIO for improved performance. (You should still place Oracle Database logs on a separate JFS2 file system for optimal performance.)

These parameters specify how many database writer processes are used to update the database disks when disk block buffers in the database buffer cache are modified. Multiple database writers are often used to work around the lack of asynchronous I/O capabilities in some operating systems, although they still work with operating systems that fully support asynchronous I/O, such as AIX.
Normally, the default values for these parameters are acceptable and should be overridden only to address very specific performance issues or at the recommendation of Oracle Support.

An appropriate value for the SHARED_POOL_SIZE parameter is very hard to determine before statistics are gathered about the actual use of the shared pool. The good news is that, starting with Oracle 9i, it is dynamic, and the upper limit of the shared pool size is controlled by the SGA_MAX_SIZE parameter. So, if you set SHARED_POOL_SIZE to an initial value and determine later that this value is too low, you can change it to a higher one, up to the limit of SGA_MAX_SIZE. Remember that the shared pool includes the data dictionary cache (the tables about the tables and indexes), the library cache (the SQL statements and execution plans), and also the session data if the shared server is used. Thus, it is not difficult to run out of space. Its size can vary from a few MB to very large, such as 20 GB or more, depending on the applications' use of SQL statements. It depends mostly on the number of tables in the databases (the data dictionary will be larger with many tables) and on the number of different SQL statements that are active or used regularly.

Starting with Oracle 9i, the Oracle SGA size can be changed dynamically. This means the DBA just needs to set the maximum amount of memory available to Oracle (SGA_MAX_SIZE) and the initial values of the different pools: DB_CACHE_SIZE, SHARED_POOL_SIZE, LARGE_POOL_SIZE, etc. The size of these individual pools can then be increased or decreased dynamically using the ALTER SYSTEM command, provided the total amount of memory used by the pools does not exceed SGA_MAX_SIZE. If LOCK_SGA = TRUE, this parameter defines the amount of memory Oracle allocates at DB startup in one piece. Also, SGA_TARGET is ignored for the purpose of memory allocation in this case.

SGA_TARGET specifies the total size of all SGA components. If SGA_TARGET is specified, then the following memory pools are automatically sized: Buffer cache (DB_CACHE_SIZE), Shared pool (SHARED_POOL_SIZE), Large pool (LARGE_POOL_SIZE), Java pool (JAVA_POOL_SIZE), Streams pool (STREAMS_POOL_SIZE)

MEMORY_TARGET specifies the Oracle system-wide usable memory. The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed. If the MEMORY_TARGET parameter is used, memory cannot be pinned. It is not recommended to use the MEMORY_TARGET parameter together with SGA_MAX_SIZE and SGA_TARGET.

PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all server processes attached to the instance. Setting PGA_AGGREGATE_TARGET to a nonzero value has the effect of
automatically setting the WORKAREA_SIZE_POLICY parameter to AUTO. This means that SQL working areas used by memory-intensive SQL operators (such as sort, group-by, hash-join, bitmap merge, and bitmap create) will be automatically sized. A nonzero value for this parameter is the default since, unless you specify otherwise, Oracle sets it to 20% of the SGA or 10 MB, whichever is greater.

PRE_PAGE_SGA defaults to TRUE in 12.1; prior to 12.1 the default was FALSE. With this set to TRUE, all segments are allocated to the maximum: at startup, PRE_PAGE_SGA causes Oracle to read and "touch" all SGA memory pages. This can result in slower startup times, but the advantage is that all further requests to SGA memory should hit real physical memory, and AIX will not need to do any additional allocations after startup.

It now takes more time for ANY Oracle process to start, as this "touching" of memory segments is done not just during instance startup but also for any new Oracle process (i.e. a new database connection shadow process). The efficiency of this "touching" depends on the page size used for the SGA. For example, an 80MB SGA using 64KB pages would need to "touch" 1280 pages, whereas an SGA using 16MB pages would only need to "touch" 5 pages. To pin memory, set LOCK_SGA to TRUE. To use 16M pages, one also needs to pin memory. If persistent memory usage issues are encountered, overriding the default pre_page_sga of TRUE and setting it to FALSE may be beneficial.
If you are planning to use the "In-Memory" feature of Oracle Database 12c, 18c or 19c, it is recommended to keep pre_page_sga = TRUE (the default).

Adaptive Features
It has been found helpful to test turning the adaptive features off, to eliminate them as a cause of performance-related issues.



Anonymous said...

This is really amazing, Good to know these details as a System Admin

Agucho said...

Very useful information. Tuneable network parameter changes require reboot? for example: no -p -o tcp_sendspace = 262144 or no -p -o tcp_recvspace = 262144