Oracle RAC 12c Release 2 - Overview
TRANSCRIPT
Copyright © 2016, Oracle and/or its affiliates. All rights reserved. |
Oracle Real Application Clusters (RAC) 12c Release 2 – Overview
Markus Michalewicz, Senior Director of Product Management, Oracle RAC Development, December 6th, 2016
[email protected] | @OracleRACpm | http://www.linkedin.com/in/markusmichalewicz | http://www.slideshare.net/MarkusMichalewicz
Oracle Database 12c Rel. 2 Real Application Clusters (RAC)
It's all about scalability, availability and efficient management:
• Better availability (due to reduced reconfiguration times)
• Better scalability (for singleton services)
• Efficient management for large scale deployments
Oracle RAC Scalability – A Brief Review
Oracle RAC scalability
• is independent of the number of nodes
• does not require application changes (unlike sharding)
Oracle RAC scales
• most of the enterprise solutions used today
• Oracle Multitenant
• Oracle Database In-Memory
Oracle RAC Scalability – More Information
http://www.slideshare.net/MarkusMichalewicz
• http://www.slideshare.net/MarkusMichalewicz/oracle-rac-internals-the-cache-fusion-edition
• http://www.slideshare.net/MarkusMichalewicz/paper-oracle-rac-internals-the-cache-fusion-edition
• http://www.slideshare.net/MarkusMichalewicz/oracle-rac-customer-proven-scalalbility
• http://www.slideshare.net/MarkusMichalewicz/oracle-multitenant-meets-oracle-rac-ioug-2014-version
• http://www.slideshare.net/MarkusMichalewicz/oracle-database-inmemory-meets-oracle-rac
Oracle RAC 12c Release 2 – Scaling in Two Dimensions
• Improved scaling for all-HUB, Standalone Clusters
• Flex Cluster-based Scaling
Optimized Singleton Workload Scaling
• Service-oriented Buffer Cache Access determines the data (on database object level) accessed by the service and masters this data on the node on which the (singleton) service is offered, which improves data access performance.
• Pluggable Database and Service Isolation improves performance by reducing DLM operations for PDBs / services not offered in all instances and by optimizing block operations based on in-memory block separation.
Oracle Flex Cluster – A Brief Review
• Introduced during OOW 2013: http://www.slideshare.net/MarkusMichalewicz/understanding-oracle-rac-12c-internals-oow13-con8806
• Recommended during OOW 2014: http://www.slideshare.net/MarkusMichalewicz/oracle-rac-12102-operational-best-practices
• The standard going forward (every Oracle 12c Rel. 2 cluster is a Flex Cluster by default).
Under the Hood: Any New Install Ends Up in a Flex Cluster
[GRID]> crsctl get cluster name
CRS-6724: Current cluster name is 'SolarCluster'
[GRID]> crsctl get cluster class
CRS-41008: Cluster class is 'Standalone Cluster'
[GRID]> crsctl get cluster type
CRS-6539: The cluster type is 'flex'.
Oracle Flex Cluster – the Scalable Architecture
• Use Case 1: Massive Parallel Query RAC – Overlay your Hadoop Cluster (HDFS) with an Oracle Flex Cluster to access data in Hadoop via SQL and perform cross-data (ad-hoc) analysis using standard interfaces.
• Use Case 2: RAC Reader Nodes – Use Read-Only workload (WL) on read-mostly Leaf node instances for ad-hoc data analysis scaled across hundreds of nodes with no delay in accessing updated data, without any impact on OLTP performance* and with better HA**.
* Read-only WL on Leaf instances will scale.
** A Leaf node failure does not impact any other node.
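Before working with either use case, it helps to confirm which nodes run as Hub and which as Leaf. A minimal sketch, assuming a running 12c Grid Infrastructure stack (output varies per cluster, so none is shown):

```shell
# Show the configured role (hub or leaf) of every node in the cluster
crsctl get node role config -all

# Show the role each node is actively running with right now
crsctl get node role status -all
```

These commands must be run from the Grid Infrastructure home on a cluster node.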
Run a Database Instance on a Leaf Node – Preparation
• Connect Leaf nodes to storage: Leaf nodes hosting applications do not require direct storage access; running database instances on Leaf nodes does.
• Install the Oracle Database Home on all nodes as needed. If you ever want to run a database instance on a Leaf node, it needs a database home like any other node.
• Extend the public network to the Leaf(s): for the RAC Reader Nodes use case only, enable a public network connection on Mars by extending the network and listener resources to the Leaf.
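The network and listener extension could be sketched as below. This is a hedged illustration, not the deck's exact procedure: the subnet values, interface name and listener name are placeholder assumptions.

```shell
# Hypothetical example values -- adjust subnet, netmask and interface
# to match the cluster's public network configuration.
srvctl modify network -netnum 1 -subnet 10.0.0.0/255.255.255.0/eth0

# Add a listener on that network so a Leaf node instance can register;
# "LEAF_LISTENER" and port 1522 are illustrative choices.
srvctl add listener -listener LEAF_LISTENER -netnum 1 -endpoints "TCP:1522"
srvctl start listener -listener LEAF_LISTENER
```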
Run a Database Instance on a Leaf Node – DB Creation
• Create a Policy-Managed RAC DB: RAC Reader Nodes as well as Massive Parallel Query RAC require a policy-managed database. Admin-managed DBs cannot be extended to Leaf nodes.
• Create the database on HUB nodes. For Massive Parallel Query RAC, create new server pools along with the database; make sure to create a "Parallel Query Server Pool". For RAC Reader Nodes, the addition of database instances on Leaf nodes is dynamic and managed via the command line.
• Server pool OLTP was pre-created using the oracle user. Policy management allows for an easy re-assignment of Leaf nodes to other tasks.
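The pre-created server pool and the policy-managed database placement described above could look roughly like this. A hedged sketch: the pool name OLTP comes from the slide, while the database name, sizes and home path are illustrative assumptions.

```shell
# Pre-create the server pool as the oracle user
# (min/max sizes are example values).
srvctl add srvpool -serverpool OLTP -min 2 -max 2

# Register a policy-managed database placed in that pool;
# db name "orcl" and the home path are placeholders.
srvctl add database -db orcl -oraclehome /u01/app/oracle/product/12.2.0/dbhome_1 -serverpool OLTP
```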
Run RAC Reader Nodes – Finalization
• For RAC Reader Nodes, add a "Reader Farm" (RF) pool to the system using the "add service" command (dynamic).
• Note that if a Leaf node is used for Massive Parallel Query RAC, it should not allow direct connections to the Leaf node instance.
• (Re-)starting the OLTPWL service finalizes the DWHWL service setup.
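The finalization steps might be sketched as follows. The service names OLTPWL and DWHWL come from the slide; the pool name "readerfarm", its sizing and the database name "orcl" are illustrative assumptions.

```shell
# Create a Reader Farm pool on Leaf-category servers (sizes are examples)
srvctl add srvpool -serverpool readerfarm -category leaf -min 1 -max 4

# Add a read-only service on the Reader Farm pool
srvctl add service -db orcl -service DWHWL -serverpool readerfarm

# (Re-)start the OLTP service to finalize the setup
srvctl stop service  -db orcl -service OLTPWL
srvctl start service -db orcl -service OLTPWL
srvctl start service -db orcl -service DWHWL
```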
Run Oracle Database In-Memory on Leaf Node Instances
• Oracle Database In-Memory is ideal for RAC Reader Nodes, and it is easy to run on Leaf nodes:
  alter system set inmemory_size=100M scope=spfile sid='*';
• A minimum 100MB Column Store size is required, and Column Stores need to be equally sized across all instances. The IMDB Column Store will be activated after instance restart.
• Emphasizing Leaf node usage by using instance-specific settings is "work in progress".
• To verify the allocation:
  select INST_ID, pool, alloc_bytes, used_bytes from GV$INMEMORY_AREA;
Oracle Database 12c Rel. 2 Real Application Clusters (RAC)
It's all about scalability, availability and efficient management:
• Better availability (due to reduced reconfiguration times)
• Better scalability (for singleton services)
• Efficient management for large scale deployments
Oracle RAC 12c Rel. 2 Three Dimensions of Availability
• Improved availability for all-HUB, Standalone Clusters
• Flex Cluster-based availability – here: Node Weighting
• Availability due to the Autonomous Health Framework continuously working for you
Optimized (Singleton) Reconfiguration Time
• Pluggable Database and Service Isolation improves availability by ensuring that failures of instances hosting only singleton PDBs will not impact other instances of the same RAC-based CDB.
• Near Zero Downtime Reconfiguration (4x faster) via "Buddy Instances", which track modified data blocks on other nodes to quickly identify blocks requiring recovery, allowing for rapid processing of new transactions in case recovery is needed.
Node Eviction Basics
• Pre-12.2, node eviction follows a rather "ignorant" pattern – example in a 2-node cluster: the node with the lowest node number survives.
• Customers must not base their application logic on which node survives the split brain, as this may(!) change in future releases.
http://www.slideshare.net/MarkusMichalewicz/oracle-clusterware-node-management-and-voting-disks
Node Weighting in Oracle RAC 12c Release 2
Idea: everything else being equal, let the majority of work survive.
• Node Weighting is a new feature that considers the workload hosted in the cluster during fencing.
• The idea is to let the majority of work survive, if everything else is equal – example: in a 2-node cluster, the node hosting the majority of services (at fencing time) is meant to survive.
Let's Define "Equal"
• A three-node cluster will benefit from "Node Weighting" if three equally sized sub-clusters are built as a result of the failure, since two differently sized sub-clusters are not equal.
• Secondary failure consideration can influence which node survives – example: a public network card failure creates a "conflict". Secondary failure consideration will be enhanced successively.
• A fallback scheme is applied if considerations do not lead to an actionable outcome.
CSS_CRITICAL – Fencing with Manual Override
• CSS_CRITICAL can be set on various levels / components to mark them as "critical" so that the cluster will try to preserve them in case of a failure.
• CSS_CRITICAL will be honored if no other technical reason prohibits survival of the node which has at least one critical component at the time of failure. Otherwise the node may be evicted despite its workload ("conflict"), and the workload will fail over.
• A fallback scheme is applied if CSS_CRITICAL settings do not lead to an actionable outcome.
crsctl set server css_critical {YES|NO}   (+ server restart)
srvctl modify database -help | grep critical
  -css_critical {YES | NO}   Define whether the database or service is CSS critical
Last but Not Least – Leaf Node Failover
• Leaf nodes require at least one Hub node in the cluster to which they can connect.
• If a Hub node fails, all Leaf nodes connected to the failed Hub node re-connect to another Hub. Failover is transparent …
  • … on cluster level.
  • … on the Leaf nodes.
  • … for instances running on the Leaf nodes.
[Diagram: Hub nodes Earth and Venus (Oracle GI | HUB, Oracle RAC) and Leaf node Mars (Oracle GI | Leaf, Oracle RAC)]
Oracle Database 12c Rel. 2 Real Application Clusters (RAC)
It's all about scalability, availability and efficient management:
• Better availability (due to reduced reconfiguration times)
• Better scalability (for singleton services)
• Efficient management for large scale deployments
Better Management Thanks to Your Feedback
• gridSetup and zip-based install allow for a simple unzip to install the Grid Home, with node management (addNode) thereafter: $ORACLE_HOME/gridSetup.sh
• ASM Management for NFS-based Clusterware files ("Configure ASM on NFS") for easier management and thereby better availability.
• A separate disk group for the Grid Infrastructure Management Repository (GIMR) allows for more flexibility during Grid Infrastructure installation.
Using these and other technologies, an average (2-4 nodes) cluster can be installed in less than an hour with proper preparation.
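The zip image-based install flow above might be sketched as follows; the staging and Grid Home paths are illustrative assumptions, not prescribed locations.

```shell
# Create the target Grid Home directory (example path)
mkdir -p /u01/app/12.2.0/grid
cd /u01/app/12.2.0/grid

# A simple unzip installs the Grid Home software
unzip -q /stage/grid_home_image.zip

# Launch the configuration wizard from the unzipped home
./gridSetup.sh
```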
What If… You Have Hundreds of Environments?
And what if…
– Software installation
– Storage configuration
– Diagnostics setup
… would have to be performed only once and could then be re-used multiple times, allowing you to save many hours performing these tiring tasks?
Cluster Domain – Simplification and Efficiency through Centralization
Facilitating …
– Rapid Home Provisioning (RHP)
– Automatic Storage Management (ASM)
– Autonomous Health Framework (AHF)
… hosted on a dedicated cluster – the Domain Services Cluster (DSC) – all three management tasks can be centralized and diagnostics can be optimized for Member Clusters in the Cluster Domain. These are only some of the benefits of the new Cluster Domain-based management.
[Diagram: Cluster Domain architecture]
• Domain Services Cluster (DSC), providing: Trace File Analyzer (TFA) Service, Mgmt Repository (GIMR) Service, Rapid Home Provisioning (RHP) Service, IO Service, ASM Service (Shared ASM), and additional optional services.
• Member Clusters, connected via private network, SAN and NAS:
  1. Database Member Cluster – uses local ASM
  2. Application Member Cluster – GI only
  3. Database Member Cluster – uses ASM Service
  4. Database Member Cluster – uses IO & ASM Service of the DSC
How to Create a Cluster Domain
• Configure an Oracle Domain Services Cluster (DSC) as part of the gridSetup-based install. A DSC install follows the "Standalone Cluster" install.
• Create a credential file for each Member Cluster you want to deploy and make it accessible to the server on which you will run the Member Cluster install.
• Run gridSetup on the server on which you want to run the Member Cluster install and provide access to the credential file when requested. Then "follow the instructions on the screen."
The Member Cluster will use only services for which it has credentials.
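The credential file step could be sketched as below, run on the DSC. The member cluster name, file path and member type are illustrative assumptions.

```shell
# On the DSC: export a credential (manifest) file for one member cluster;
# "mc1" and the file path are placeholder values.
crsctl create member_cluster_configuration mc1 \
    -file /tmp/mc1_manifest.xml -member_type database

# Copy the manifest to the member cluster's install server and point
# gridSetup.sh at it when prompted for the credential file.
```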
Proven Features – Even More Beneficial on the DSC
• The Autonomous Health Framework works more efficiently for you on the DSC, as continuous analysis is taken off the production cluster.
• The DSC is the ideal hosting environment for Rapid Home Provisioning (RHP).
• Oracle ASM 12c Rel. 2 based storage consolidation is best performed on the DSC, as it enables numerous additional features and use cases.
Summary
• Oracle RAC 12c Rel. 2 provides
  – Better Scalability: inherently as part of the database, and via making Flex Cluster the standard
  – Better Availability: inherently as part of the database, and on cluster level via Node Weighting
  – Efficient Management for large scale deployments: inherently as part of the Installer and RHP, and via Cluster Domain-based Management