
Materials may not be reproduced in whole or in part without the prior written permission of IBM.

2011 IBM Power Systems Technical University, October 10-14 | Fontainebleau Miami Beach | Miami, FL

Title: Running Oracle DB on IBM AIX and POWER7
Best Practices for Performance & Tuning

Session: 17.11.2011, 16:00-16:45, Room Krakau
Speaker: Frank Kraemer

Frank Kraemer, IBM Systems Architect, mailto:[email protected]

Francois Martin, IBM Oracle Center, mailto:[email protected]


Agenda

CPU: POWER7

Memory: AIX VMM tuning

IO: Storage considerations
- AIX LVM striping
- Disk / Fibre Channel driver optimization
- Virtual disk / virtual Fibre Channel driver optimization
- AIX mount options
- Asynchronous IO

NUMA Optimization


IBM Power 7 (Socket/Chip/Core/Threads)

Each POWER7 socket = 1 POWER7 chip

Each POWER7 chip = 8 POWER7 cores at 4 GHz

Each POWER7 core = 4 HW SMT threads

In AIX, when SMT is enabled, each SMT thread is seen as a logical CPU (lcpu).

32 chips = 256 cores = 1024 SMT threads = 1024 AIX logical CPUs

[Diagram: Power hardware (cores with 4 SMT threads each) mapped to the AIX software view of logical CPUs (lcpu).]

Presenter notes:
The Power 795 is based on MCMs (quad-chip modules): each socket carries 4 chips x 8 cores = 32 cores, so it is an 8-socket server with up to 256 cores.

Scalability with IBM POWER Systems and Oracle

[Diagram: Power server line-up, from blade servers and the Power 710 1S/2U, 720 1S/4U, 730 2S/2U, 740 2S/4U up to the Power 750, 770, 780 and 795.]

Oracle certifies for the O/S, and all Power servers run the same AIX = complete flexibility for Oracle workload deployment.

Consistency:
- Binary compatibility
- Mainframe-inspired reliability
- Support for full HW-based virtualization
- AIX, Linux and IBM i OS

POWER7 Server RAS Features

Virtualization: PowerVM is core firmware; thin bare-metal hypervisor; device-driver-free hypervisor; HW-enforced virtualization; IBM integrated stack; dynamic LPAR operations; redundant VIOS support; concurrent remote support; Live Partition Mobility (LPM).

General: First Failure Data Capture; hot-node add/repair; concurrent firmware updates; redundant clocks & service processors; dynamic clock failover; ECC on fabric busses; light path diagnostics; Electronic Service Agent.

CPU: dynamic CPU deallocation; dynamic processor sparing; processor instruction retry & alternate processor recovery; partition availability priority; processor-contained checkstop; uncorrectable error handling; CPU CoD.

Memory/Cache: redundant bit steering; dynamic page deallocation; error-checking channels & spare signal lines; memory protection keys (AIX); dynamic cache deallocation & cache line delete; HW memory & L3 scrubbing; memory CoD.

I/O: integrated I/O product development - stable drivers; redundant I/O drawer links; independent PCIe busses; PCIe bus error recovery; hot-add GX bus adapter; I/O drawer hot add / concurrent repair; hot-swap disk, media & PCIe adapters.

AIX v6/v7: LVM and JFS2; SMIT - reduces human errors; integrated HW/OS/FW error reporting; hot AIX kernel patches; WPAR and WPAR mobility; configurable error logs; resource monitoring & control; EAL 4+ security certification.

[Diagram: PowerVM Hypervisor hosting Virtual I/O Server 1 and 2 plus OS LPARs #1..#N, sharing disk, memory, CPU and network resources.]


POWER7 SMT performance

• Requires POWER7 mode
– POWER6 mode supports SMT1 and SMT2

• Operating system support
– AIX 6.1 and AIX 7.1
– IBM i 6.1 and 7.1
– Linux

• Dynamic runtime SMT scheduling
– Spreads work among cores to execute in the appropriate threaded mode
– Can dynamically shift between modes as required: SMT1 / SMT2 / SMT4

• LPAR-wide SMT controls
– ST, SMT2, SMT4 modes
– smtctl / mpstat commands

• Mixed SMT modes supported within the same LPAR
– Requires use of "Resource Groups"
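A minimal sketch of the LPAR-wide controls mentioned above (the SMT4 target is an example value; the change is dynamic, -w boot would make it persistent):

# smtctl (show the current SMT mode and thread states)
# smtctl -t 4 -w now (switch the whole LPAR to SMT4 dynamically)
# mpstat -s 5 1 (per-thread utilization grouped by physical core, to see how the 4 threads are used)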

Presenter notes:
Dynamic runtime SMT scheduling: the first thread of each core is dispatched as long as there are free physical cores or virtual processors, regardless of whether the scheduler folding option is enabled (CPU folding is activated by default). "Requires use of Resource Groups" is a reference to WPARs.


POWER7 specific tuning

• Use SMT4
Gives a CPU performance boost to handle more concurrent threads in parallel.

• Disable HW prefetching
Usually improves performance for database workloads on big SMP Power systems.
# dscrctl -n -b -s 1 (dynamically disables HW memory prefetch and keeps this configuration across reboots)
# dscrctl -n -b -s 0 (resets HW prefetching to the default value)

• Use Terabyte Segment aliasing
Improves CPU performance by reducing SLB misses (segment address resolution).
# vmo -p -o esid_allocator=1 (on by default on AIX 7)

• Use Large Pages (LP): 16 MB memory pages
Improves CPU performance by reducing TLB misses (page address resolution).
# vmo -r -o lgpg_regions=xxxx -o lgpg_size=16777216 (xxxx = number of 16 MB segments you want)
# vmo -p -o v_pinshm=1
(check large page usage with "svmon -mwU oracle")

Enable the Oracle user ID to use Large Pages:
# chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
export ORACLE_SGA_PGSZ=16M before starting Oracle (as the oracle user)
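Putting the large-page steps together, a minimal sketch; the 8 GB SGA and therefore the 512-region count are assumed example values, not from the slides:

# vmo -r -o lgpg_regions=512 -o lgpg_size=16777216 (reserve 512 x 16 MB = 8 GB of large pages; takes effect after reboot)
# vmo -p -o v_pinshm=1 (allow pinned shared memory)
# chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
Then, as the oracle user, before starting the instance: export ORACLE_SGA_PGSZ=16M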

Presenter notes:
Use of terabyte segments avoids misses when looking up the SLB (Segment Lookaside Buffer). On POWER7 there are 12 memory segments free for applications, so SLB misses appear above roughly 3 GB of memory allocation per process. AIX supports application segments that are a terabyte (TB) in size starting from the 61L/710 levels; before that, only 256 MB segments were supported. With TB segment support we eliminate nearly all SLB misses (and therefore the SLB reload overhead in the kernel). Applications using a large amount of shared memory (in one customer benchmark, 270 GB, leading to over 1300 x 256 MB segments) incur many SLB hardware faults because the referenced data is scattered across all of these segments; TB segment support alleviates this problem.


Virtual Memory Management

• Objective: Tune the VMM to protect computational pages (programs, SGA, PGA) from being paged out and force the LRUD to steal pages from FS cache only.

[Diagram: memory split between kernel, programs, SGA+PGA and FS cache, next to the paging space. 1 - system startup; 2 - DB activity (needs more memory); 3 - paging activity => performance degradation.]

1 - AIX is started and applications load some computational pages into memory. As a UNIX system, AIX will try to take advantage of the free memory by using it as a file cache to reduce IO on the physical drives.

2 - Activity increases and the DB needs more memory, but there are no free pages available. The LRUD (AIX page stealer) starts to free pages in memory.

3 - With the default settings, the LRUD will page out some computational pages instead of removing only pages from the file system cache.

Presenter notes:
By default in AIX 7.


Memory : VMM Tuning for Filesystems (lru_file_repage)

This VMM tuning tip applies to AIX 5.2 ML4+, AIX 5.3 ML1+ and AIX 6.1+.

lru_file_repage:
• This parameter prevents computational pages from being paged out.
• By setting lru_file_repage=0 (1 is the default value) you are telling the VMM that your preference is to steal pages from the FS cache only.

"New" vmo tuning approach - recommendations based on VMM developer experience. Use "vmo -p -o" to set the following parameters:

lru_file_repage = 0
page_steal_method = 1 (reduces the scan:free ratio)
minfree = max(960, 120 x #lcpus*) / #memory pools**
maxfree = minfree + (j2_maxPageReadAhead*** x #lcpus*) / #memory pools**
* to see #lcpus, use "bindprocessor -q"
** for #memory pools, use "vmstat -v"
*** to see j2_maxPageReadAhead, use "ioo -L j2_maxPageReadAhead"
Increase minfree and recalculate maxfree if "Free Frame Waits" increases over time (vmstat -s).

minperm% = from 3 to 10
maxperm% = maxclient% = from 70 to 90
strict_maxclient = 1
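As a worked example, a minimal sketch applying these recommendations; the 16 lcpus, 2 memory pools and j2_maxPageReadAhead=128 are assumed example values:

# vmo -p -o lru_file_repage=0
# vmo -r -o page_steal_method=1 (may require -r and a reboot depending on the AIX level)
# vmo -p -o minperm%=3 -o maxperm%=90 -o maxclient%=90 -o strict_maxclient=1
# vmo -p -o minfree=960 -o maxfree=1984 (minfree = max(960, 120*16)/2 = 960; maxfree = 960 + (128*16)/2 = 1984)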

Presenter notes:
Memory pools?


Memory : Using AIX dynamic LPAR jointly with Oracle dynamic memory allocation (AMM)

Scenario - initial configuration: memory_max_size = 18 GB, real memory = 12 GB, memory_target = 8 GB (SGA+PGA), the rest free for AIX.

The Oracle tuning advisor indicates that SGA+PGA needs to be increased to 11 GB. memory_target can be increased dynamically to 11 GB, but real memory is only 12 GB, so it needs to be increased as well.

- Step 1: Increase the physical memory allocated to the LPAR to 15 GB, dynamically, using AIX DLPAR:
  ssh hscroot@hmc "chhwres -r mem -m <system> -o a -p <LPAR name> -q 3072"

- Step 2: Increase the SGA+PGA allocated to the database instance to 11 GB, on the fly:
  alter system set memory_target=11G;

Final configuration: memory_max_size = 18 GB, real memory = 15 GB, memory_target = 11 GB (SGA+PGA), the rest free for AIX.
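A minimal end-to-end sketch of the two steps; the HMC name "hmc", managed system "P780-1" and LPAR name "oraclelpar" are assumed placeholders:

# ssh hscroot@hmc 'chhwres -r mem -m P780-1 -o a -p oraclelpar -q 3072' (add 3072 MB to the LPAR)
# lsattr -El sys0 -a realmem (confirm the new real memory size on AIX)
# su - oracle -c "sqlplus / as sysdba" <<'EOF'
alter system set memory_target=11G;
EOF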


IO : Database Layout

Stripe and mirror everything (S.A.M.E) approach:

Goal is to balance I/O activity across all disks, loops, adapters, etc...

Avoid/Eliminate I/O hotspots

Manual file-by-file data placement is time consuming, resource intensive and iterative

Additional advice to implement SAME: apply the SAME strategy to data and indexes; if possible, separate the redo logs (and archive logs).

Having a good Storage configuration is a key point :

Because disk is the slowest part of an infrastructure

Reconfiguration can be difficult and time consuming


RAID-5 vs. RAID-10 Performance Comparison

I/O Profile        RAID-5      RAID-10
Sequential Read    Excellent   Excellent
Sequential Write   Excellent   Good
Random Read        Excellent   Excellent
Random Write       Fair        Excellent

IO : RAID Policy with ESS, DS6/8K

With enterprise-class storage (with a huge cache), RAID-5 performance is comparable to RAID-10 (for most customer workloads).

Consider RAID-10 for workloads with a high percentage of random write activity (> 25%) and high I/O access densities (peak > 50%)

Use RAID-5 or RAID-10 to create striped LUNs

If possible try to minimize the number of LUNs per RAID array to avoid contention on physical disk.

[Diagram: the storage subsystem performs HW striping and exposes LUN 1-4.]


IO : 2nd Striping (LVM)

1. LUNs are striped across physical disks (stripe size of the physical RAID: ~64k, 128k, 256k).

2. LUNs are seen as hdisk devices on the AIX server.

3. Create AIX Volume Group(s) (VG) with LUNs from multiple arrays.

4. Create Logical Volumes striped across the hdisks (stripe size: 8M, 16M, 32M, 64M).

=> Each read/write access to the LV is well balanced across LUNs and uses the maximum number of physical disks for best performance.

[Diagram: storage-side HW striping across LUN 1-4, seen in AIX as hdisk1-4, grouped in a volume group with an LVM-striped LV (raw or with jfs2).]


IO : LVM Spreading / Striping

Two different ways to stripe at the LVM layer:

1. Stripe using the Logical Volume (LV)
- Create the Logical Volume with the striping option: mklv -S <stripe-size> ...
  Valid LV stripe sizes:
  • AIX 5.2: 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, 1M
  • AIX 5.3+: AIX 5.2 stripe sizes + 2M, 4M, 16M, 32M, 64M, 128M

2. PP (disk Physical Partition) striping (a.k.a. spreading)
- Create a Volume Group with an 8M, 16M or 32M PP size (the PP size acts as the "stripe size"):
  • AIX 5.2: choose a Big VG and change the "t factor": # mkvg -B -t <factor> -s <PPsize> ...
  • AIX 5.3+: choose a Scalable Volume Group: # mkvg -S -s <PPsize> ...
- Create the LV with the "Maximum range of physical volumes" option to spread the PPs over the different hdisks in a round-robin fashion: # mklv -e x ...

jfs2 filesystem creation advice (see the sketch below):
If you create a jfs2 filesystem on a striped (or PP-spread) LV, use the INLINE logging option. It avoids a hot spot by creating the log inside the (striped) filesystem instead of using a single PP stored on one hdisk.

# crfs -a logname=INLINE ...
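A minimal sketch of the PP-spreading approach; the VG/LV names, the 16 MB PP size and the 512 LP count are assumed example values:

# mkvg -S -s 16 -y oradatavg hdisk1 hdisk2 hdisk3 hdisk4 (scalable VG, 16 MB PPs act as the "stripe")
# mklv -e x -t jfs2 -y oradatalv oradatavg 512 (spread the PPs round-robin across the hdisks)
# crfs -v jfs2 -d oradatalv -m /oradata -a logname=INLINE -a agblksize=4096
# mount /oradata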

Presenter notes:
The same applies to ASM: 1 MB stripe size by default, which can be changed starting with 11g.


IO : Disk Subsystem Advices

Check that the definition of your disk subsystem is present in the ODM.

If the description shown in the output of "lsdev -Cc disk" contains the word "Other", AIX does not have a correct definition of your disk device in the ODM and is using a generic device definition.
# lsdev -Cc disk

In general, a generic device definition provides far from optimal performance, since it does not properly customize the hdisk devices; for example, the hdisks are created with queue_depth=1 (generic device definition = bad performance).

1. Contact your vendor or go to their web site to download the correct ODM definition for your storage subsystem. It will set up the hdisk devices properly for your hardware and for optimal performance.

2. If AIX is connected to the storage subsystem through several Fibre Channel cards for performance, do not forget to install a multipath device driver or path control module:
- SDD or SDDPCM for IBM DS6000/DS8000
- PowerPath for EMC disk subsystems
- HDLM for Hitachi, etc.


IO : AIX IO tuning (1) – LVM Physical buffers (pbuf)

[Diagram: each hdisk (LUN) of the volume group has its own pbuf pool feeding the LVM-striped LV (raw or with jfs2).]

1. Each LVM physical volume has a pool of physical buffers (pbuf).

2. The "vmstat -v" command helps you detect a shortage of pbufs.

3. If there is a shortage of pbufs, there are 2 solutions:
- Add more LUNs (this adds more pbufs), or
- Increase the pbuf size: lvmo -v <vg_name> -o pv_pbuf_count=XXX
(see the "vmstat -v" output and the sketch below)
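A minimal sketch for checking and raising the pbuf count; the VG name "oradatavg" and the new count are assumed example values:

# vmstat -v | grep "pending disk I/Os blocked with no pbuf" (a growing counter indicates a pbuf shortage)
# lvmo -a -v oradatavg (show pv_pbuf_count and the per-VG blocked IO counter)
# lvmo -v oradatavg -o pv_pbuf_count=1024 (increase the per-PV pbuf count for this VG)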


IO : AIX IO tuning (2) – hdisk queue_depth (qdepth)

[Diagram: each hdisk of the volume group has its own pbuf pool and its own queue (queue_depth) below the LVM-striped LV.]

1. Each AIX hdisk has a queue whose length is called the queue depth. This parameter sets the number of parallel requests that can be sent to the physical disk.

2. To know whether you have to increase the qdepth, use nmon (DDD in interactive mode, -d in recording mode) and monitor: service time, wait time and servQ full.

3. If the service time is < 2-3 ms, the storage behaves well and can handle more load; if at the same time the wait time is > 1 ms, the disk queue is full and IOs are waiting to be queued => increase the hdisk queue depth (chdev -l hdiskXX -a queue_depth=YYY). See the sketch below.

[Diagram: an application IO waits in the hdisk queue (wait time) and is then serviced by the storage (service time); nmon DDD output example: service time 1.9 ms, wait time 0.2 ms.]
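A minimal sketch for inspecting and raising a queue depth; the hdisk name and the value 32 are assumed example values:

# lsattr -El hdisk4 -a queue_depth (current queue depth)
# iostat -DRTl 5 3 (watch avgserv, avgtime and sqfull per disk)
# chdev -l hdisk4 -a queue_depth=32 -P (-P records the change in the ODM, applied at the next reboot or when the disk is free)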


IO : AIX IO tuning (3) – HBA tuning (num_cmd_elems)

[Diagram: the hdisk queues of the volume group feed the Fibre Channel adapter queues (fcs0, fcs1) through the MPIO/sddpcm driver, out to the external storage.]

1. Each FC HBA has a queue, "num_cmd_elems". This queue plays the same role for the HBA as the qdepth does for the disk.

2. Rule of thumb: num_cmd_elems = (sum of the qdepths) / number of HBAs.

3. Changing num_cmd_elems: chdev -l fcsX -a num_cmd_elems=YYY
You can also change max_xfer_size=0x200000 and lg_term_dma=0x800000 with the same command.

These changes use more memory and must be made with caution; check first with "fcstat fcsX" (see the sketch below).
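A minimal sketch; the adapter name and values are assumed examples, and the attribute change needs the adapter to be reconfigured or the system rebooted:

# fcstat fcs0 | grep -i "No Command Resource Count" (non-zero and growing means num_cmd_elems is too small)
# chdev -l fcs0 -a num_cmd_elems=1024 -a max_xfer_size=0x200000 -a lg_term_dma=0x800000 -P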


Virtual SCSI

[Diagram: the VIOS owns the shared Fibre Channel and SCSI adapters and, through the POWER Hypervisor, serves virtual SCSI disks (A1-A3, B1-B3) to AIX 5.3 / AIX 6.1 / Linux micro-partitions.]

Multiple LPARs can use the same or different physical disks:
- Configured as logical volumes on the VIOS
- Appear as hdisks on the micro-partition
- An entire hdisk can also be assigned to a single client

The VIOS owns the physical disk resources:
- LVM-based storage on the VIO Server
- Physical storage can be SCSI or FC, local or remote

The micro-partition sees the disks as vSCSI (Virtual SCSI) devices:
- Virtual SCSI devices are added to a partition via the HMC
- LUNs on the VIOS are accessed as vSCSI disks
- The VIOS must be active for the client to boot

Virtual I/O helps reduce hardware costs by sharing disk drives


VIOS : VSCSI IO tuning

[Diagram: IO path from the AIX LPAR (hdisk qdepth, vscsi0) through the POWER Hypervisor to the VIOS (vhost0, hdisk qdepth, MPIO/sddpcm, fcs0 queue) and the external storage.]

# lsdev -Cc disk
hdisk0 Available Virtual SCSI Disk Drive

On the client LPAR, the storage driver cannot be installed
=> the default qdepth is 3: bad performance
=> chdev -l hdisk0 -a queue_depth=20, then monitor service time / wait time with nmon to adjust the queue depth.
* You have to set the same queue_depth for the source hdisk on the VIOS.

HA (dual-VIOS configuration): change vscsi_err_recov to fast_fail:
chdev -l vscsi0 -a vscsi_err_recov=fast_fail

Performance: no tuning can be made on the vscsi adapter itself. Each vscsi adapter can handle 512 cmd_elems (2 are reserved for the adapter and 3 are reserved for each vdisk), so use the following formula to find the number of disks you can attach behind one vscsi adapter:
Nb_luns = (512 - 2) / (Q + 3), with Q = qdepth of each disk.
If more disks are needed => add another vscsi adapter.

On the VIOS:
- Install the storage subsystem driver on the VIOS.
- Monitor VIOS FC activity with nmon (interactive: press "a" or "^"; recording: "-^" option).
- Adapt num_cmd_elems according to the sum of the qdepths; check with "fcstat fcsX".
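A minimal client-side sketch; the device names and the queue depth of 20 are assumed example values:

# chdev -l vscsi0 -a vscsi_err_recov=fast_fail -P
# chdev -l hdisk0 -a queue_depth=20 -P
Capacity check: with queue_depth=20, one vscsi adapter supports (512 - 2) / (20 + 3) = 22 vdisks.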


N-Port ID Virtualization

[Diagram: AIX 5.3 / AIX 6.1 / Linux LPARs with virtual FC adapters mapped by the VIOS 8Gb PCIe Fibre Channel adapter, through a Fibre Channel switch, to SAN storage and a tape library.]

► LPARs have direct visibility on the SAN (zoning/masking)
► The I/O virtualization configuration effort is dramatically reduced
► NPIV does not eliminate the need to virtualize the PCI family of adapters
► Tape library support

The VIOS Fibre Channel adapter supports multiple World Wide Port Names / Source Identifiers: the physical adapter appears as multiple virtual adapters to the SAN / end-point device, and virtual adapters can be assigned to multiple operating systems sharing the physical adapter.


NPIV Simplifies SAN Management

[Diagram: with the Virtual SCSI model (POWER5 or POWER6), the VIOS virtualizes the DS8000/EMC disks behind its FC adapters and presents generic SCSI disks to the AIX client. With N-Port ID Virtualization (POWER6 or POWER7), the client's virtual FC adapter shares the VIOS physical FC adapter, and the AIX LPAR sees the DS8000/EMC disks directly on the SAN.]


VIOS : NPIV IO tuning

[Diagram: IO path from the AIX LPAR (hdisk qdepth, MPIO/sddpcm, virtual fcs0 queue with its own WWPNs) through the POWER Hypervisor (vfchost0) to the VIOS physical fcs0 queue and the SAN.]

# lsdev -Cc disk
hdisk0 Available MPIO FC 2145

On the client LPAR, the storage driver must be installed
=> the default qdepth is set by the driver
=> monitor service time / wait time with nmon or iostat to tune the queue depth.

On the client LPAR:
HA (dual-VIOS configuration), change the following parameters:
chdev -l fscsi0 -a fc_err_recov=fast_fail
chdev -l fscsi0 -a dyntrk=yes
Performance: monitor FC activity with nmon (interactive: option "a" or "^"; recording: option "-^"), adapt num_cmd_elems, check with "fcstat fcsX".

On the VIOS:
HA, change the same parameters:
chdev -l fscsi0 -a fc_err_recov=fast_fail
chdev -l fscsi0 -a dyntrk=yes
Performance: monitor FC activity with nmon (interactive: option "^" only; recording: option "-^"), adapt num_cmd_elems, check with "fcstat fcsX".
=> The VIOS physical adapter's num_cmd_elems should equal the sum of the num_cmd_elems of the virtual FC adapters connected to the backend device.


IO : Filesystems Mount Options (DIO, CIO)

[Diagram: writes with a normal file system go through the database buffer and the OS FS cache; writes with Direct IO (DIO) and Concurrent IO (CIO) bypass the FS cache.]


IO : Filesystems Mount Options (DIO, CIO)

If Oracle data is stored in a filesystem, some mount options can improve performance:

Direct IO (DIO) - introduced in AIX 4.3
• Data is transferred directly from the disk to the application buffer, bypassing the file buffer cache and hence avoiding double caching (filesystem cache + Oracle SGA).
• Emulates a raw-device implementation.

To mount a filesystem in DIO:
$ mount -o dio /data

Concurrent IO (CIO) - introduced with jfs2 in AIX 5.2 ML1
• Implicit use of DIO.
• No inode locking: multiple threads can perform reads and writes on the same file at the same time.
• Performance achieved with CIO is comparable to raw devices.

To mount a filesystem in CIO:
$ mount -o cio /data

[Chart: benchmark throughput over the run duration; higher tps indicates better performance.]

Presenter notes:
CIO: JFS2 uses a read-shared, write-exclusive inode lock (multiple readers can access the file simultaneously, but a write access must hold the lock in exclusive mode), i.e. write serialization at the file level. Oracle implements its own data serialization, which ensures that data inconsistencies do not occur, so it does not need the file system to implement this serialization. For such applications AIX 5.2 offers CIO: with CIO, multiple threads can simultaneously perform reads and writes on a shared file.


IO : Benefits of CIO for Oracle

Benefits:
1. Avoids double caching: the data is already cached in the application layer (SGA).
2. Gives faster access to the backend disk and reduces CPU utilization.
3. Disables the inode lock, allowing several threads to read and write the same file concurrently (CIO only).

Restrictions:
1. Because the data transfer bypasses the AIX buffer cache, jfs2 prefetching and write-behind cannot be used. These functionalities can be handled by Oracle:
=> (Oracle parameter) db_file_multiblock_read_count = 8, 16, 32, ..., 128 according to the workload.

2. When using DIO/CIO, the IO requests made by Oracle must be aligned with the jfs2 block size to avoid demoted IO (a fallback to normal IO after a Direct IO failure).
=> When you create a JFS2, use the "mkfs -o agblksize=XXX" option to match the FS block size to the application needs (see the sketch below).

Rule: IO request = n x agblksize

Examples: if the DB block size is 4k or larger, then jfs2 agblksize=4096.
Redo logs are always written in 512-byte blocks, so the redo log filesystem's agblksize must be 512.
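A minimal sketch of filesystem creation matched to these rules; the LV names and mount points are assumed example values:

# crfs -v jfs2 -d oradatalv -m /oradata -a agblksize=4096 -a logname=INLINE (datafiles, DB block size >= 4k)
# crfs -v jfs2 -d oredolv -m /oraredo -a agblksize=512 -a logname=INLINE (redo logs, 512-byte writes)
# mount -o noatime,cio /oradata
# mount -o noatime,cio /oraredo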

Presenter notes:
Do not allocate a JFS with the large-file-enabled (bf) attribute: it increases the minimum DIO transfer size from 4K to 128K, forcing Oracle to read and write a minimum of 128K bytes to exploit DIO. The CIO mount option should only be used for filesystems containing data intended to be accessed in concurrent mode, such as Oracle datafiles, online redo logs and control files; it should not be used for filesystems containing libraries and executables, such as the one containing $ORACLE_HOME. 512 = OS physical block size.


IO : Direct IO demoted – Step 1

[Diagram: the application issues a 512-byte CIO write to a jfs2 filesystem with a 4096-byte block size; the CIO write fails because the IO is not aligned with the FS block size.]


IO : Direct IO demoted – Step 2

[Diagram: the AIX kernel reads the whole 4096-byte FS block from disk into virtual memory.]


IO : Direct IO demoted – Step 3

[Diagram: the application writes its 512-byte block into the 4 KB page in memory.]


IO : Direct IO demoted – Step 4

[Diagram: the AIX kernel writes the 4 KB page back to disk; because of the CIO access, the 4 KB page is then removed from memory.]

To conclude: demoted IO consumes more CPU time (in the kernel) and more physical IO: 1 IO write = 1 physical read + 1 physical write.


IO : Direct IO demoted

How to detect demoted IO:

Trace commands to check for demoted IO:
# trace -aj 59B,59C ; sleep 2 ; trcstop ; trcrpt -o directio.trcrpt
# grep -i demoted directio.trcrpt

Extract from an Oracle AWR report (test made in Montpellier with Oracle 10g):

Waits on redo log - with demoted IO (FS blk = 4k):
log file sync: Waits 2,229,324 | %Time-outs 0.00 | Total Wait Time 62,628 s | Avg wait 28 ms | Waits/txn 1.53

Waits on redo log - without demoted IO (FS blk = 512):
log file sync: Waits 494,905 | %Time-outs 0.00 | Total Wait Time 1,073 s | Avg wait 2 ms | Waits/txn 1.00


FS mount options advice

• Oracle binaries
  - Standard: mount -o rw (cached by AIX fscache)
  - Optimized: mount -o noatime (cached by AIX fscache; noatime reduces inode modifications on reads)

• Oracle datafiles
  - Standard: mount -o rw (cached by AIX fscache and by Oracle SGA)
  - Optimized: mount -o noatime,cio (cached by Oracle SGA only)

• Oracle redo logs
  - Standard: mount -o rw (cached by AIX fscache and by Oracle SGA)
  - Optimized: mount -o noatime,cio with jfs2 agblksize=512 to avoid IO demotion (cached by Oracle SGA only)

• Oracle archive logs
  - Standard: mount -o rw (cached by AIX fscache)
  - Optimized: mount -o noatime,rbrw (uses jfs2 write-behind, but memory is released after the write)

• Oracle control files
  - Standard: mount -o rw (cached by AIX fscache)
  - Optimized: mount -o noatime (cached by AIX fscache)


IO : Asynchronous IO (AIO)

• Allows multiple requests to be sent without having to wait until the disk subsystem has completed the physical IO.

• The use of asynchronous IO is strongly advised whatever the type of file system and mount option implemented (JFS, JFS2, CIO, DIO).

POSIX vs Legacy:
Since AIX 5L V5.3, two types of AIO are available: Legacy and POSIX. For the moment, the Oracle code uses the Legacy AIO servers.

[Diagram: the application puts IO requests into the AIO queue (aioq); aioserver processes pick them up and issue them to the disk.]

Presenter notes:
Use legacy aio. DBWR_IO_SLAVES is relevant only on systems with only one database writer process (DBW0). It specifies the number of I/O server processes used by the DBW0 process. The DBW0 process and its server processes always write to disk. By default, the value is 0 and I/O server processes are not used. If you set DBWR_IO_SLAVES to a nonzero value, the number of I/O server processes used by the ARCH and LGWR processes is set to 4. However, the number of I/O server processes used by Recovery Manager is set to 4 only if asynchronous I/O is disabled (either your platform does not support asynchronous I/O or disk_asynch_io is set to false. Typically, I/O server processes are used to simulate asynchronous I/O on platforms that do not support asynchronous I/O or that implement it inefficiently. However, you can use I/O server processes even when asynchronous I/O is being used. In that case the I/O server processes will use asynchronous I/O. I/O server processes are also useful in database environments with very large I/O throughput, even if asynchronous I/O is enabled. Here are some suggested rules of thumb for determining what value to set maximum number of servers to: The first rule of thumb suggests that you limit the maximum number of servers to a number equal to ten times the number of disks that are to be used concurrently, but not more than 80. The minimum number of servers should be set to half of this maximum number. Another rule of thumb is to set the maximum number of servers to 80 and leave the minimum number of servers set to the default of 1 and reboot. Monitor the number of additional servers started throughout the course of normal workload. After a 24-hour period of normal activity, set the maximum number of servers to the number of currently running aios + 10, and set the minimum number of servers to the number of currently running aios - 10. In some environments you may see more than 80 aios KPROCs running. If so, consider the third rule of thumb. A third suggestion is to take statistics using vmstat -s before any high I/O activity begins, and again at the end. Check the field iodone. From this you can determine how many physical I/Os are being handled in a given wall clock period. Then increase the maximum number of servers and see if you can get more iodones in the same time period.


IO : Asynchronous IO (AIO) tuning

Rule of thumb:

maxservers should be = (10 * <number of disks accessed concurrently>) / #cpus
maxreqs (a multiple of 4096) should be > 4 * #disks * queue_depth
...but only testing allows minservers and maxservers to be set correctly.

• Important tuning parameters:

Check the AIO configuration with:
AIX 5.X: lsattr -El aio0
AIX 6.1: ioo -L | grep aio

• maxreqs: size of the asynchronous IO queue.
• minservers: number of aioserver kernel processes to start (system-wide on AIX 5L).
• maxservers: maximum number of aioservers that can be running per logical CPU.

Monitoring (see the sketch below):
In Oracle's alert.log file, if maxservers is set too low:
"Warning: lio_listio returned EAGAIN"
"Performance degradation may be seen"

The number of aioservers in use can be monitored via "ps -k | grep aio | wc -l", "iostat -A" or nmon (option A).
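A minimal monitoring sketch (AIX 6.1 syntax assumed):

# ioo -L | grep aio (aio_maxreqs, aio_maxservers, aio_minservers and the fastpath settings)
# ps -k | grep aio | wc -l (number of aioserver kprocs currently started)
# iostat -A 5 3 (asynchronous IO statistics: avgc, avfc, maxreqs, ...)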



IO : Asynchronous IO (AIO) fastpath

• FS with CIO/DIO and AIX 5.3 TL5+: activate fsfast_path (comparable to fast_path, but for filesystems with CIO/DIO)

AIX 5L: add the following line to /etc/inittab: aioo:2:once:aioo -o fsfast_path=1
AIX 6.1: ioo -p -o aio_fsfastpath=1 (default setting)

• Raw devices / ASM: check the AIO configuration with "lsattr -El aio0" and enable the asynchronous IO fast_path:

AIX 5L: chdev -a fastpath=enable -l aio0 (default since AIX 5.3)
AIX 6.1: ioo -p -o aio_fastpath=1 (default setting)

[Diagram: with fast_path, the application IO goes straight through the AIX kernel to the disk, with no aioserver kproc involved.]

With fast_path, IOs are queued directly from the application into the LVM layer without any aioserver kproc operation:
- Better performance compared to non-fast_path
- No need to tune the min and max aioservers
- No aioserver processes => "ps -k | grep aio | wc -l" is not relevant; use "iostat -A" instead



IO : AIO, DIO/CIO & Oracle parameters

How to set the filesystemio_options parameter - possible values:

ASYNCH: enables asynchronous I/O on file system files (default)
DIRECTIO: enables direct I/O on file system files (disables AIO)
SETALL: enables both asynchronous and direct I/O on file system files
NONE: disables both asynchronous and direct I/O on file system files

Since version 10g, Oracle opens data files located on a JFS2 file system with the O_CIO option if the filesystemio_options initialization parameter is set to either DIRECTIO or SETALL.

Advice: set this parameter to ASYNCH and let the system manage CIO via the mount options (see the CIO/DIO implementation advice).

Note: set the disk_asynch_io parameter to TRUE as well.
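A minimal sketch of applying this advice (SYSDBA access assumed; both parameters are static, so they are set in the spfile and need an instance restart):

# su - oracle -c "sqlplus -s / as sysdba" <<'EOF'
alter system set filesystemio_options=ASYNCH scope=spfile;
alter system set disk_asynch_io=true scope=spfile;
EOF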


NUMA architecture

• NUMA stands for Non uniform memory access. It is a computer memory design used in multiprocessors, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors.


Oracle NUMA feature

Oracle DB NUMA support was introduced in 1998 on the first NUMA systems. It provides memory/process models that rely on specific OS features to perform better on this kind of architecture.

Since 10.2.0.4, NUMA features are enabled by default on x86 hardware.

On AIX, the NUMA support code has been ported; the default is off.


Oracle NUMA parameters

The following shows all the hidden parameters that include ‘NUMA’ in the name:

• _NUMA_instance_mapping      Not specified
• _NUMA_pool_size             Not specified
• _db_block_numa              1
• _enable_NUMA_interleave     TRUE
• _enable_NUMA_optimization   FALSE
• _enable_NUMA_support        FALSE
• _numa_buffer_cache_stats    0
• _numa_trace_level           0
• _rm_numa_sched_enable       TRUE
• _rm_numa_simulation_cpus    0
• _rm_numa_simulation_pgs     0
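One common way to list these hidden parameters yourself is a query on the x$ksppi / x$ksppcv fixed tables, sketched below (SYSDBA access assumed):

# su - oracle -c "sqlplus -s / as sysdba" <<'EOF'
set linesize 120 pagesize 100
column name format a35
column value format a20
select x.ksppinm name, y.ksppstvl value
  from x$ksppi x, x$ksppcv y
 where x.indx = y.indx
   and upper(x.ksppinm) like '%NUMA%';
EOF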


Enable Oracle NUMA Rset Naming

• Oracle 11g has NUMA optimization disabled by default.
• _enable_NUMA_support=true is required to enable NUMA features.

• When _enable_NUMA_support = TRUE, Oracle checks for AIX rset named “${ORACLE_SID}/0” at startup.

• For now, it is assumed that it will use rsets ${ORACLE_SID}/0, ${ORACLE_SID}/1, ${ORACLE_SID}/2, etc if they exist.


Prepare the system for Oracle NUMA optimization

The test is done on a POWER7 machine with the following CPU and memory distribution (dedicated LPAR). It has 4 domains with 8 cores (32 logical CPUs) and >27 GB of memory each. If the lssrad output shows unevenly distributed domains, fix the problem before proceeding.

• Listing the SRADs (affinity domains):
# lssrad -va
REF1   SRAD      MEM       CPU
0
       0     27932.94      0-31
       1     31285.00      32-63
1
       2     29701.00      64-95
       3     29701.00      96-127

• We set up 4 rsets, namely SA1/0, SA1/1, SA1/2 and SA1/3, one for each domain:
# mkrset -c 0-31 -m 0 SA1/0
# mkrset -c 32-63 -m 0 SA1/1
# mkrset -c 64-95 -m 0 SA1/2
# mkrset -c 96-127 -m 0 SA1/3

• Required Oracle user capabilities:
# lsuser -a capabilities oracle
oracle capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE


_enable_NUMA_Support

• The system is ready to be tested, so we enable the NUMA parameter in the init.ora parameter file:

#*._enable_NUMA_optimization=true #deprecated, use _enable_NUMA_support

*._enable_NUMA_support=true


Making Process Private Memory Local

• Before starting the DB, let’s set vmo options to cause process private memory to be local.

# vmo -o memplace_data=1 -o memplace_stack=1


Starting the NUMA enabled Database

• Once started, Oracle sets the following parameters:
_NUMA_instance_mapping        Not specified
_NUMA_pool_size               Not specified
_db_block_numa                4
_enable_NUMA_interleave       TRUE
_enable_NUMA_optimization     FALSE
_enable_NUMA_support          TRUE
_numa_buffer_cache_stats      0
_numa_trace_level             0
_rm_numa_sched_enable         TRUE
_rm_numa_simulation_cpus      0
_rm_numa_simulation_pgs       0

• The following messages are found in the alert log. Oracle finds the 4 rsets and treats them as NUMA domains:
LICENSE_MAX_USERS = 0
SYS auditing is disabled
NUMA system found and support enabled (4 domains - 32,32,32,32)
Starting up Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production


Oracle shared segments differences

• Check the shared memory segments. There is now a total of 7, one of which is owned by ASM: the SA1 instance has 6 shared memory segments instead of 1.

# ipcs -ma | grep oracle   (without NUMA optimization)
m 2097156    0x8c524c30 --rw-rw---- oracle dba oracle dba 31 285220864   13369718 23987126 23:15:57 23:15:57 12:41:22
m 159383709  0x3b5a2ebc --rw-rw---- oracle dba oracle dba 59 54089760768 17105120 23331318 23:16:13 23:16:13 23:15:45

# ipcs -ma | grep oracle   (with NUMA optimization)
m 2097156    0x8c524c30 --rw-rw---- oracle dba oracle dba 29 285220864   13369718 7405688  23:27:32 23:32:38 12:41:22
m 702545926  00000000   --rw-rw---- oracle dba oracle dba 59 2952790016  23987134 23920648 23:32:42 23:32:42 23:27:21
m 549453831  00000000   --rw-rw---- oracle dba oracle dba 59 2952790016  23987134 23920648 23:32:42 23:32:42 23:27:21
m 365953095  0x3b5a2ebc --rw-rw---- oracle dba oracle dba 59 20480       23987134 23920648 23:32:42 23:32:42 23:27:21
m 1055916188 00000000   --rw-rw---- oracle dba oracle dba 59 3087007744  23987134 23920648 23:32:42 23:32:42 23:27:21
m 161480861  00000000   --rw-rw---- oracle dba oracle dba 59 42144366592 23987134 23920648 23:32:42 23:32:42 23:27:21
m 333447326  00000000   --rw-rw---- oracle dba oracle dba 59 2952790016  23987134 23920648 23:32:42 23:32:42 23:27:21


Post-Setup background processes inspections

• Take a look at the Oracle processes. The Oracle background processes that have multiple instantiations are automatically attached to the rsets in a round-robin fashion.

• All other Oracle background processes are also attached to an rset, mostly SA1/0. lgwr and psp0 are attached to SA1/1.
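A minimal sketch for checking those attachments (the SID "SA1" comes from the previous slides; "lsrset -p" is used here to display the resource set a given process is attached to):

# for pid in $(ps -ef | awk '/ora_[a-z0-9]*_SA1/ && !/awk/ {print $2}')
do
  echo "== pid $pid =="
  lsrset -p $pid
done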


Standard Oracle Memory Structures

[Diagram: standard Oracle memory structures: a single SGA shared memory segment containing the database buffer cache, redo log buffer, shared pool, large pool, Java pool and streams pool, plus private PGAs for server process 1, server process 2 and the background processes.]


NUMA Oracle Memory Structures (to be confirmed)

[Diagram: with NUMA, each SRAD (SRAD0-SRAD3) holds its own shared memory segment with a slice of the database buffer cache plus a NUMA pool; the redo log buffer lives in one of them; the large pool, shared pool, streams pool and Java pool are striped across the domains; server and background process PGAs remain local.]

Presenter notes:
Explain that the process mapping looks similar to a cluster, with processes local to each SRAD (= each resource set).


What Goes into Which Shared Memory Segment?

• Notice that the shared pool is split into a shared pool and a NUMA pool. Shared pool items that can be partitioned are moved to the NUMA pool.

• Notice also that the log_buffer is piggybacked onto the affinitized shared memory segment on SRAD1; this is where lgwr is attached.


Affinitizing User Connections

• If Oracle shadow processes are allowed to migrate across domains, the benefit of NUMA-enabling Oracle will be lost. Therefore, arrangements need to be made to affinitize the user connections.

• For network connections, multiple listeners can be arranged with each listener affinitized to a different domain. The Oracle shadow processes are children of the individual listeners and inherit the affinity from the listener.

• For local connections, the client process can be affinitized to the desired domain/rset. These connections do not go through any listener, and the shadows are children of the individual clients and inherit the affinity from the client. Example below using sqlplus as local client:

# attachrset SA1/2 $$
# sqlplus itsme/itsme


SQL*Net configuration example for a 4-node Oracle NUMA configuration using the IPC protocol (SID=SA)

listener.ora
#----------------------------------------------------------------
LISTENER0 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = IPC0)(QUEUESIZE = 200))
      )
    )
  )
#----------------------------------------------------------------
LISTENER1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = IPC1)(QUEUESIZE = 200))
      )
    )
  )

tnsnames.ora
SA0 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = IPC0))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = SA)
    )
  )

SA1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = IPC1))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = SA)
    )
  )

...

SA =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = IPC0)(QUEUESIZE = 1000))
    (ADDRESS = (PROTOCOL = IPC)(KEY = IPC1)(QUEUESIZE = 1000))
    (ADDRESS = (PROTOCOL = IPC)(KEY = IPC2)(QUEUESIZE = 1000))
    (ADDRESS = (PROTOCOL = IPC)(KEY = IPC3)(QUEUESIZE = 10000))
  )


Listener startup procedure example

• Listener startup procedure:
# for i in 0 1 2 3
do
  execrset SA1/${i} su - oracle -c "export MEMORY_AFFINITY=MCM
  lsnrctl stop LISTENER${i}
  lsnrctl start LISTENER${i}"
done

• Database registration: the init.ora service_names parameter should be set to: *.service_names = SA
# su - oracle -c "sqlplus sys/ibm4tem3 as sysdba" <<EOF
alter system register;
EOF


A Simple Performance Test

• Four Oracle users, each having its own schema and tables, are defined. The 4 schemas are identical other than the name.

• Each user connection performs queries using random numbers as keys and repeats the operation until the end of the test.

• The DB cache is big enough to hold all 4 schemas in their entirety; it is therefore an in-memory test.

• All test cases are the same except for the domain-attachment control. Each test runs a total of 256 connections, 64 for each Oracle user.


Relative Performance

                       Case 0   Case 1   Case 2       Case 3       Case 4        Case 5
NUMA config            No       Yes      Yes          Yes          Yes           Yes
Connection affinity    No       No       RoundRobin   RoundRobin   Partitioned   Partitioned
NUMA_interleave        NA       Yes      Yes          No           Yes           No
Relative performance   100%     113%     113%         112%         137%          144%

* RoundRobin = 16 connections of each Oracle user run in each domain.
* Partitioned = 64 connections of one Oracle user run in each domain (the listener is attached to the rset).

The relative performance shown applies only to this individual test and can vary widely with different workloads.



Performance Analysis

• It is apparent that the biggest boost from NUMA optimization is obtained when user connections are affinitized and the workload does not share data across domains (cases 4 and 5).

• Cases 2 and 3 have user-connection affinity, but the workload shares data across domains. In this sample test, they do not perform any better than the case without connection affinity (case 1).

• The meaning of the _enable_NUMA_interleave parameter is not documented in detail; one would assume that it enables/disables some kind of memory interleaving. Consider test cases 2 and 3: the throughput is the same regardless of whether it is enabled, possibly because these test cases have low memory locality anyway. In contrast, test case 5 shows a decent boost over case 4 by disabling interleaving. This suggests that if the workload already has high memory locality, disabling _enable_NUMA_interleave can further increase memory locality.


A real Benchmark result

• Very promising feature for CPU-bound workloads
– Speed-up of up to 45% on an ISV benchmark case, Power 780 & 11gR2
– Applicable from Power 740 to Power 795, for LPARs with more than 8 cores

• Further benchmark tests in progress
– NUMA_optimization parameter
– Size of the Oracle rsets: POWER7 chip vs. POWER7 node
– Compatibility with SPLPARs, AME, ...

Presenter notes:
ISV test (Temenos): +45% performance in TPM.


Thank you. Questions?

mailto:[email protected]