TRANSCRIPT
In RAC We Trust
ORACLE - From Dream To Production
Plamen Zyumbyulev
„Let someone know”
BGOUG – Gabrovo 22.04.2005
Presentation Goals
Describe the major steps in RAC implementation
Show and explain the main problems and obstacles
Show and explain solutions and workarounds
Give some practical ideas and variations
Try to look at the problems from different angles
Non-Goals
Explain RAC concepts and fundamentals
Show ALL aspects of RAC
Agenda
Introduction
RAC Installation
High Availability Configuration
Testing and Tuning RAC
Implementing RAC in production
Introduction
RAC Installation
Problems
Solutions
Problems
Need to evaluate and purchase cluster hardware
Need for knowledgeable OS, high availability, network and storage professionals
All this takes time
Solutions
RAC on Single Node
RAC on Single VMware Node
RAC on Multiple VMware Nodes
RAC and Network Block Device
All of the solutions presented here are for testing purposes ONLY. These configurations are not
certified or supported by Oracle Support Services
Other Solutions
RAC on Single Node
Why not?
Metalink Note:241114.1 - Step-By-Step Installation of RAC on Linux - Single Node (Oracle9i 9.2.0 with OCFS)
Key Points
No need for a “fencing” configuration
No need for a clustered file system or raw devices
No need for multiple Oracle homes (ORACLE_HOME)
RAC on Single Node (cont’d)
Key Points (cont’d)
One oracle user with two or more profiles, one for every instance (e.g. .rac1, .rac2, …)
zyumix:/# su - oracle
oracle@zyumix:~$ . rac1
oracle@zyumix:~$ echo $ORACLE_SID
rac1

zyumix:/# su - oracle
oracle@zyumix:~$ . rac2
oracle@zyumix:~$ echo $ORACLE_SID
rac2
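A minimal sketch of what such a profile might contain (the file contents and paths below are assumptions for illustration, not from the slides):

# hypothetical profile "rac1", sourced with ". rac1" as shown above
export ORACLE_BASE=/u01/app/oracle               # assumed location
export ORACLE_HOME=$ORACLE_BASE/product/9.2.0    # the one shared ORACLE_HOME
export ORACLE_SID=rac1                           # the only per-instance difference
export PATH=$ORACLE_HOME/bin:$PATH

The profile for the second instance would be identical except for ORACLE_SID=rac2.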
RAC on Single Node (cont’d)
Key Points (cont’d)
Oracle Universal Installer needs the Clusterware software in order to install the RAC option.
Disadvantages
RAC on Single Node (cont’d)
[Diagram: Server A runs Instance ‘A’ with Listener ‘A’; Server B runs Instance ‘B’ with Listener ‘B’; both instances open the same database. Clients reach the instances through client-side and server-side load balancing.]
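Client-side load balancing is driven purely by the client’s tnsnames.ora. A sketch of such an entry (host names, port and service name are hypothetical):

RAC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = server-a)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = server-b)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = rac))
  )

Server-side load balancing, by contrast, is handled by the listeners themselves (the instances register their load with the listeners, e.g. via the REMOTE_LISTENER parameter), so a listener can hand a connection to the less loaded instance.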
RAC on Single VMware Node
Even easier !!! The Oracle-on-Linux VMware Cookbook -
http://www.oracle.com/technology/tech/linux/vmware/cookbook/index.html
An easy, hands-on, step-by-step guide describing how to install VMware, Linux (RHEL/SLES) and Oracle RAC (again on a single node)
VMware Workstation (90-day free eval; registration required)
RHEL3 (not free), SLES8 (not free)
Disadvantages
RAC on Multiple VMware Nodes
VMware GSX/ESX Server permits the sharing of plain disks with multiple virtual machines running on the same host, provided the disk in question is a SCSI disk.
This approach is powerful but complex: you can create very elaborate environments – multiple NICs, switches, disks, etc.
Now there are a number of nodes, albeit virtual ones.
Disadvantages
RAC and Network Block Device
This solution allows you to build a scalable and highly available database system using only common Intel PCs connected by an Ethernet network.
In this solution, a standard shared disk subsystem is replaced by a native Linux technology – Network Block Device (NBD) – that maps remote files to local block devices (e.g. /dev/nb0) over a TCP/IP network. One computer (not necessarily a Linux machine) serves as the data storage for all cluster nodes (Linux machines) instead of an expensive disk array.
RAC and Network Block Device (cont’d)
With NBD support compiled into the kernel, Linux can use a remote server as one of its block devices. Every time the client computer wants to read /dev/nb0, it sends a request to the server via TCP, which replies with the data requested.
The remote resource doesn't need to be a whole disk or even a partition. It can be a file.
RAC and Network Block Device (cont’d)
[Diagrams: a typical configuration and a simple NBD configuration.]
RAC and Network Block Device (cont’d)
Installation
Both client and server machines run RHEL3.
Download the source from http://nbd.sourceforge.net/
As root do:

bunzip2 nbd-2.7.3.tar.bz2
tar -xvf nbd-2.7.3.tar
cd nbd-2.7.3
./configure
make
make install
RAC and Network Block Device (cont’d)
Creating a new empty file at the NBD server

[root@rac2 root]# dd if=/dev/zero of=/u01/oradata/rac/system.01 count=300 bs=1M
300+0 records in
300+0 records out
314572800 bytes transferred in 1.683993 seconds (186801738 bytes/sec)
[root@rac2 root]#

Running the NBD server
Syntax: nbd-server <port> <filename>

[root@rac2 root]# nbd-server 4101 /u01/oradata/rac/system.01
[root@rac2 root]#
RAC and Network Block Device (cont’d)
NBD client
The NBD client must be run as root (because of the kernel parts of NBD). Before starting the NBD client you have to install the Linux kernel NBD module.

Installing the nbd module – RHEL3
[root@rac3 root]# rpm -Uvh kernel-unsupported-2.4.21-4.EL.i686.rpm
warning: kernel-unsupported-2.4.21-4.EL.i686.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing... ########################################### [100%]
1:kernel-unsupported ########################################### [100%]
[root@rac3 root]#
RAC and Network Block Device (cont’d)
NBD client (cont’d)
Loading the nbd module

[root@rac3 dev]# lsmod | grep nbd
[root@rac3 dev]# modprobe nbd
[root@rac3 dev]# lsmod | grep nbd
nbd                    16388   0  (unused)
[root@rac3 dev]#

Running the NBD client
Syntax: nbd-client <data server> <port> /dev/nb<n>

[root@rac3 dev]# nbd-client rac2 4101 /dev/nb0
Negotiation: ..size = 307200KB
bs=1024, sz=307200
[root@rac3 dev]#
RAC and Network Block Device (cont’d)
Now the block devices are configured and it is possible to access the remote data. Oracle Real Application Clusters needs raw access to the shared disk subsystem, so the block devices have to be mapped to raw devices. This can be done with the standard raw command.
Syntax: raw /dev/raw/raw<N> /dev/<blockdev>
[root@rac3 root]# raw /dev/raw/raw1 /dev/nb0
/dev/raw/raw1: bound to major 43, minor 0
[root@rac3 root]#
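A binding made with the raw command does not survive a reboot. On RHEL3 it can be made persistent through /etc/sysconfig/rawdevices (a sketch, assuming the single binding created above):

# /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/nb0

[root@rac3 root]# service rawdevices restart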
Other Solutions
RAC and FireWire
Build Your Own Oracle RAC 10g Cluster on Linux and FireWire
http://www.oracle.com/technology/pub/articles/hunter_rac10g.html

RAC and NFS
Locking
Caching
Write-through cache
HA Configuration
Simplified RAC schema
[Diagram: simplified RAC schema – cluster nodes connected over the network (NET) to the shared database.]
HA System
Maximum Availability Architecture
[Diagram: a primary site and a secondary site, each with an application server and a database, connected by a dedicated network and synchronized with Data Guard.]
Extended Distance Clusters
[Diagram: one RAC cluster stretched across a primary and a secondary site over a dedicated network, each site with its own application server, on top of a storage virtualization layer.]
What about the Quorum Server???
Extended Distance Clusters (cont’d)
Resolving Distance Problems
Application partitioning
gc_files_to_locks
Wavelength Division Multiplexing – Dense (DWDM) and Coarse (CWDM)
ACTIVE_INSTANCE_COUNT (see the sketch below)
*.active_instance_count = 1
*.cluster_database_instances = 2
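In the ACTIVE_INSTANCE_COUNT variant, the first instance to start is the active one and the second acts as a hot standby. A minimal init.ora sketch putting the two parameters above in context (the cluster_database line is an assumption added for completeness):

*.cluster_database = TRUE
*.cluster_database_instances = 2
*.active_instance_count = 1    # the first instance to start serves all connections;
                               # the second one takes over only after a failure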
Testing and Tuning RAC
Introduction
RAC testing steps
Functional Application Tests
RAC High Availability tests
Scalability tests
Digging into RAC performance problems
Introduction
Testing isn’t trivial !!!
Classical testing/tuning methods. Always tune single instance first!
Specific RAC issues
RAC aware tools
RAC testing steps
Functional Application Tests
RAC High Availability tests – Be aware of the timeouts!!!
Scalability tests
RAC testing steps (cont’d)
Scalability tests (cont’d)
Patterns of application scalability
[Chart: performance (TPS, 1/response time) of one user vs. load (# of users, size of tables); curve labels: exponential constrained, linear, nearly static.]
RAC testing steps (cont’d)
Scalability tests (cont’d) – Good scalability
[Chart: performance – TPS (for all users) vs. # of concurrent users (4 to 100), single node vs. 2-node RAC.]
RAC testing steps (cont’d)
Scalability tests (cont’d) – Problem!!! Possible disk bottleneck
[Chart: performance – TPS (for all users) vs. # of concurrent users (4 to 100), single node vs. 2-node RAC.]
RAC testing steps (cont’d)
Scalability tests (cont’d) – Problem!!! Possible interconnect bottleneck
[Chart: performance – TPS (for all users) vs. # of concurrent users (4 to 100), single node vs. 2-node RAC.]
Digging into RAC performance problems
The interconnect and the shared storage are the two most likely performance problem areas in RAC.
Interconnect “speed”
Throughput
Latency
The average latency of a consistent block request (AVG CR BLOCK RECEIVE TIME) should typically be about 15 milliseconds, depending on your system configuration and volume. One way to check it is shown below.
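A common query for this figure, run against GV$SYSSTAT (9i statistic names; the time values are in centiseconds, hence the multiplication by 10):

SELECT b1.inst_id,
       b2.value AS cr_blocks_received,
       b1.value AS cr_block_receive_time,
       (b1.value / b2.value) * 10 AS avg_cr_block_receive_time_ms
FROM   gv$sysstat b1, gv$sysstat b2
WHERE  b1.name = 'global cache cr block receive time'
AND    b2.name = 'global cache cr blocks received'
AND    b1.inst_id = b2.inst_id;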
Digging into RAC performance problems (cont’d)
Interconnect types

Interconnect       Latency (μs)   CPU overhead   Bandwidth (MB/sec)
SMP BUS            0.5            low            > 500
Memory Channel     3              low            > 100
Myrinet            7 to 9         low            ~250
Sun SCI            10             N/A            ~70
Gigabit Ethernet   100            high           100
Hyper Fabric       20             low            400
Infiniband         < 10           low            1000
Digging into RAC performance problems (cont’d)
cluster_interconnects parameter
It provides Oracle with information about additional cluster interconnects available for use and can be used to load-balance the interconnect traffic across different physical interconnects, thus increasing interconnect bandwidth.
When you set CLUSTER_INTERCONNECTS in cluster configurations, the interconnect high availability features are not available. In other words, an interconnect failure that would normally go unnoticed instead causes an Oracle cluster failure, as Oracle still attempts to access the network interface that has gone down.
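Setting the parameter amounts to one line per instance in the init.ora/spfile, with colon-separated addresses of the private interfaces (the instance names and IPs here are hypothetical):

rac1.cluster_interconnects = '10.0.1.1:10.0.2.1'
rac2.cluster_interconnects = '10.0.1.2:10.0.2.2'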
Digging into RAC performance problems (cont’d)
STATSPACK reports
The STATSPACK report shows statistics ONLY for the node or instance on which it was run.
Top 5 Timed Events
Global Cache Service and Global Enqueue Service
Note:135714.1 Script to Collect RAC Diagnostic Information (racdiag.sql)
Digging into RAC performance problems (cont’d)
BAD PERFORMANCE

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                             % Total
Event                                     Waits   Time (s)  Ela Time
--------------------------------------- -------- --------- --------
global cache cr request                   34,568       958     31.44
buffer busy global CR                      6,513       620     20.35
db file sequential read                   64,214       455     14.92
latch free                                13,542       453     14.88
buffer busy waits                         10,971       295      9.69

GOOD PERFORMANCE

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                             % Total
Event                                     Waits   Time (s)  Ela Time
--------------------------------------- -------- --------- --------
latch free                                10,969       666     51.28
buffer busy waits                         15,379       159     12.23
CPU time                                               149     11.47
PL/SQL lock timer                             51       105      8.05
db file sequential read                   25,163        96      7.36
Digging into RAC performance problems (cont’d)
Global Cache Service - Workload Characteristics

                                                    BAD   GOOD
------------------------------------------------- ----- -----
Ave global cache get time (ms):                    11.8    2.2
Ave global cache convert time (ms):                51.7   11.2
Ave build time for CR block (ms):                   0.7    0.0
Ave flush time for CR block (ms):                   0.2    0.2
Ave send time for CR block (ms):                    0.0    0.2
Ave time to process CR block request (ms):          0.9    0.4
Ave receive time for CR block (ms):                 1.6    0.4
Ave pin time for current block (ms):                0.2    0.2
Ave flush time for current block (ms):              0.0    0.0
Ave send time for current block (ms):               0.1    0.1
Ave time to process current block request (ms):     0.3    0.3
Ave receive time for current block (ms):           33.4    7.5
Global cache hit ratio:                             9.5    3.9
Ratio of current block defers:                      0.0    0.0
% of messages sent for buffer gets:                 6.7    2.5
% of remote buffer gets:                            1.8    0.7
Ratio of I/O for coherence:                         1.2    1.3
Ratio of local vs remote work:                      4.3    4.4
Ratio of fusion vs physical writes:                 0.0    0.0
Digging into RAC performance problems (cont’d)
Global Enqueue Service Statistics

                                                     BAD   GOOD
-------------------------------------------------- ----- -----
Ave global lock get time (ms):                       0.2    0.0
Ave global lock convert time (ms):                   0.0    0.0
Ratio of global lock gets vs global lock releases:   1.2    1.1

GCS and GES Messaging statistics

                                                     BAD   GOOD
-------------------------------------------------- ----- -----
Ave message sent queue time (ms):                   16.5    1.7
Ave message sent queue time on ksxp (ms):           29.4    2.5
Ave message received queue time (ms):                1.9    0.3
Ave GCS message process time (ms):                   0.1    0.1
Ave GES message process time (ms):                   0.1    0.0
% of direct sent messages:                          49.7   63.4
% of indirect sent messages:                        50.3   36.6
% of flow controlled messages:                       0.0    0.0
Implementing RAC in production
Smooth transition from single instance to RAC
Change ORACLE_HOME
Relinking the RAC Option ON/OFF
CLUSTER_DATABASE = TRUE/FALSE
Start/Stop the second instance
Start gradual movement of clients from one instance to another
Relinking the RAC Option
1. Log in as the Oracle software owner and shut down all database instances on all nodes in the cluster.
2. cd $ORACLE_HOME/rdbms/lib
3. make -f ins_rdbms.mk rac_on (rac_off)
If this step does not fail with fatal errors, proceed to step 4.
4. make -f ins_rdbms.mk ioracle
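Condensed into one session, the relink sequence above looks like this (a sketch; assumes all instances are already shut down):

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk rac_on     # or rac_off to deactivate the RAC option
$ make -f ins_rdbms.mk ioracle    # relink and install the new oracle binary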
Reference
Metalink Note:211177.1 RAC Survival Kit: Rac On / Rac Off - Relinking the RAC Option
Metalink Note:183340.1 Frequently Asked Questions About the CLUSTER_INTERCONNECTS Parameter in 9i.
http://www.fi.muni.cz/~kripac/orac-nbd/
The Oracle-on-Linux VMware Cookbook – http://www.oracle.com/technology/tech/linux/vmware/cookbook/index.html
Build Your Own Oracle RAC 10g Cluster on Linux and FireWire http://www.oracle.com/technology/pub/articles/hunter_rac10g.html
Note:135714.1 Script to Collect RAC Diagnostic Information (racdiag.sql)
Thank You