How to recover and open the database if the archive log required for recovery is missing
As part of the recovery process, the restore went fine and we were also able to re-create the controlfile. During
recovery, Oracle asked for archive logs. We checked with our Unix team for the required archive logs and found out
they did not have them.
It was critical for us to recover the database because of a project deadline.
Error:
SQL> recover database until cancel using backup controlfile;
ORA-00279: change 9867098396261 generated at 03/21/2008 13:37:44 needed for
thread 1
ORA-00289: suggestion : /arcredo/XSCLFY/log1_648355446_2093.arc
ORA-00280: change 9867098396261 for thread 1 is in sequence #2093
Specify log: {=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01195: online backup of file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u100/oradata/XSCLFY/SYSTEM01_SCLFY.dbf'
ORA-01112: media recovery not started
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01195: online backup of file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u100/oradata/XSCLFY/SYSTEM01_SCLFY.dbf'
After doing some research, I found that a hidden parameter (_ALLOW_RESETLOGS_CORRUPTION=TRUE)
would allow us to open the database even though it was not properly recovered.
We forced the database open by setting _ALLOW_RESETLOGS_CORRUPTION=TRUE. It allowed us to open the
database, but the instance crashed immediately after opening. I checked the alert.log file and found
undo tablespace corruption.
The alert log showed the errors below:
Errors in file /u01/XSCLFYDB/admin/XSCLFY/udump/xsclfy_ora_9225.trc:
ORA-00600: internal error code, arguments: [4194], [17], [9], [], [], [], [], []
Tue Mar 25 12:45:55 2008
Errors in file /u01/XSCLFYDB/admin/XSCLFY/bdump/xsclfy_smon_24975.trc:
ORA-00600: internal error code, arguments: [4193], [53085], [50433], [], [], [], [], []
Doing block recovery for file 433 block 13525
Block recovery from logseq 2, block 31 to scn 9867098416340
To resolve the undo corruption issue, I changed undo_management to "Manual" in the init.ora. This allowed us to
open the database successfully. Once the database was up and running, I created a new undo tablespace and dropped the
old corrupted undo tablespace. I then changed undo_management back to "Auto" and undo_tablespace to
the new undo tablespace.
This resolved our issue, and the database was up and running without any further problems.
_ALLOW_RESETLOGS_CORRUPTION=TRUE allows the database to open without consistency checks. This may
result in a corrupted database, so the database should be recreated.
As per Oracle Metalink, there is no 100% guarantee that setting _ALLOW_RESETLOGS_CORRUPTION=TRUE
will open the database. However, once the database is opened, we must immediately rebuild it. A database
rebuild means: (1) perform a full database export, (2) create a brand-new, separate database, and finally
(3) import the export dump. This option can be tedious and time consuming, but once we successfully open
the new database we expect minimal or perhaps no data loss at all. Before you try this option, ensure that
you have a good, valid backup of the current database.
Solution:
1) Set _ALLOW_RESETLOGS_CORRUPTION=TRUE in the init.ora file.
2) Startup mount.
3) Recover the database.
4) Alter database open resetlogs.
5) Set undo_management to "Manual" in the init.ora file.
6) Startup the database.
7) Create a new undo tablespace.
8) Change undo_management to "Auto" and undo_tablespace to the new undo tablespace.
9) Bounce the database.
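Put together, the procedure might look like the SQL*Plus session below. This is a sketch only: the undo tablespace names and datafile path are invented, and _ALLOW_RESETLOGS_CORRUPTION is an unsupported parameter that should only be used as a last resort, ideally with Oracle Support involved.

```sql
-- init.ora: _allow_resetlogs_corruption=TRUE
STARTUP MOUNT
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;

-- after the instance crashes on undo corruption, set
-- undo_management=MANUAL in init.ora, restart, and rebuild undo:
STARTUP
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u100/oradata/XSCLFY/undotbs2_01.dbf' SIZE 500M;
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
-- finally set undo_management=AUTO and undo_tablespace=UNDOTBS2
-- in init.ora and bounce the database
```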
Transportable tablespace for a different database block size (db_block_size)
Source: In this machine the database block size is 8K.
1) On the source machine, create a tablespace TTS and a user TTS whose default tablespace is TTS.
2)exec dbms_tts.transport_set_check(‘TTS’);
3)Check for violations using
Select * from transport_set_violations;
4) Put tablespace TTS in read-only mode
Alter tablespace TTS read only;
5)$ exp file=tts.dmp TABLESPACES=TTS TRANSPORT_TABLESPACE=Y
Username/password:- sys as sysdba
6) Copy the dump file and the datafile to the destination machine
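Step 6 might look like the following (the destination host and paths are taken from the import example below, but your locations will differ):

```
$ scp tts.dmp oracle5@fed01:/home/oracle5/
$ scp /u02/oradata/tts/tts01.dbf oracle5@fed01:/u02/oradata/tts/
```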
Destination:- In this machine the database block size is 4K.
Add the following parameter to the pfile:
db_8k_cache_size=200M
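If the destination instance uses an spfile, the same parameter can be set dynamically instead of editing the pfile (a sketch; size the cache to suit your system):

```sql
-- allocate a buffer cache for 8K blocks before plugging in the tablespace
ALTER SYSTEM SET db_8k_cache_size = 200M;
```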
Perform the import using the command below:
[oracle5@fed01 ~]$ imp file=tts.dmp TABLESPACES=TTS TRANSPORT_TABLESPACE=Y DATAFILES='/u02/oradata/tts/tts01.dbf'
Import: Release 11.2.0.1.0 - Production on Thu Jul 18 04:25:22 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V11.02.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing SYS's objects into SYS
. importing TTS's objects into TTS
. . importing table "TTS1"
. importing SYS's objects into SYS
Import terminated successfully without warnings.
Note: If we do not set db_8k_cache_size in the pfile, the import fails with the following error:
Export file created by EXPORT:V11.02.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing SYS's objects into SYS
IMP-00017: following statement failed with ORACLE error 29339:
"BEGIN sys.dbms_plugts.beginImpTablespace('TTS',5,'SYS',1,0,8192,1,322450,"
"1,2147483645,8,128,8,0,1,2147483645,8,1197301176,1,4129,321958,NULL,0,0,NUL"
"L,NULL); END;"
IMP-00003: ORACLE error 29339 encountered
ORA-29339: tablespace block size 8192 does not match configured block sizes
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_PLUGTS", line 1682
ORA-06512: at "SYS.DBMS_PLUGTS", line 1813
ORA-06512: at line 1
IMP-00000: Import terminated unsuccessfully
[oracle5@fed01 ~]$ oerr ora 29339
29339, 00000, "tablespace block size %s does not match configured block sizes"
// *Cause: The block size of the tablespace to be plugged in or
// created does not match the block sizes configured in the
// database.
// *Action:Configure the appropriate cache for the block size of this
// tablespace using one of the various (db_2k_cache_size,
// db_4k_cache_size, db_8k_cache_size, db_16k_cache_size,
// db_32K_cache_size) parameters.
Block Change Tracking file
RMAN's change tracking feature for incremental backups improves incremental
backup performance by recording the changed blocks in each datafile in a change tracking file. If change tracking is enabled, RMAN uses the change tracking file to identify the changed blocks for an incremental backup, thus avoiding the need to scan every block in the datafile.
Change tracking is disabled by default, because it introduces some minimal performance overhead on the database during normal operations. However, the benefits of avoiding full datafile scans during backup are considerable, especially if only a small percentage of data blocks changes between backups. If your backup strategy involves incremental backups, you should enable change tracking.
One change tracking file is created for the whole database. By default, the change tracking file is created as an Oracle managed file in DB_CREATE_FILE_DEST. We can also specify the name of the block change tracking file, placing it in any desired location.
Using change tracking in no way changes the commands used to perform incremental backups, and the change tracking file itself generally requires little maintenance after initial configuration.
From Oracle 10g, the background process Block Change Tracking Writer (CTWR) does the job of writing modified block details to the block change tracking file.
In a Real Applications Clusters (RAC) environment, the change tracking file must be located on shared storage accessible from all nodes in the cluster.
Oracle saves enough change-tracking information to enable incremental backups to be taken using any of the 8 most recent incremental backups as its parent.
Although RMAN does not support backup and recovery of the change tracking file itself, restoring and recovering the whole database or a subset of it has no user-visible effect on change tracking. After the restore and recovery, the change tracking file is cleared and starts recording block changes again. The next incremental backup after the recovery is able to use change tracking data.
After enabling change tracking, the first level 0 incremental backup still has to scan the entire datafile, as the change tracking file does not yet reflect the status of the blocks. Subsequent incremental backups that use this level 0 as parent will take advantage of the change tracking file.
Enabling and Disabling Change Tracking
We can enable or disable change tracking when the database is either open or mounted. To alter the change tracking setting, we must use SQL*Plus to connect to the target database with administrator privileges.
To store the change tracking file in the database area, set DB_CREATE_FILE_DEST in the target database. Then issue the following SQL statement to enable change tracking:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
We can also create the change tracking file in a desired location, using the following SQL statement:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u02/rman/rman_change_track.f';
The REUSE option tells Oracle to overwrite any existing file with the specified name.
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u02/rman/rman_change_track.f' REUSE;
To disable change tracking, use this SQL statement:
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
If the change tracking file was stored in the database area, then it will be deleted when we disable change tracking.
Checking Whether Change Tracking is Enabled
From SQL*Plus, we can query V$BLOCK_CHANGE_TRACKING to determine whether change tracking is enabled or not.
SQL> select status from V$BLOCK_CHANGE_TRACKING;
ENABLED => block change tracking is enabled.
DISABLED => block change tracking is disabled.
Query V$BLOCK_CHANGE_TRACKING to display the filename.
SQL> select filename from V$BLOCK_CHANGE_TRACKING;
Moving the Change Tracking File
If you need to move the change tracking file, the ALTER DATABASE RENAME FILE command updates the control file to refer to the new location.
1. If necessary, determine the current name of the change tracking file:
SQL> SELECT filename FROM V$BLOCK_CHANGE_TRACKING;
/u02/rman/rman_change_track.f
2. Shut down the database.
SQL> SHUTDOWN IMMEDIATE
3. Using host operating system commands, move the change tracking file to its new location.
$ mv /u02/rman/rman_change_track.f /u02/rman_new/rman_change_track.f
4. Mount the database and rename the change tracking file inside the database. For example:
SQL> STARTUP MOUNT
SQL> ALTER DATABASE RENAME FILE '/u02/rman/rman_change_track.f' TO '/u02/rman_new/rman_change_track.f';
5. Open the database.
SQL> ALTER DATABASE OPEN;
SQL> SELECT filename FROM V$BLOCK_CHANGE_TRACKING;
/u02/rman_new/rman_change_track.f
If you cannot shut down the database, then you must disable change tracking and re-enable it at the new location:
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u02/rman_new/rman_change_track.f';
If you choose this method, you will lose the contents of the change tracking file. Until the next time you complete a level 0 incremental backup, RMAN will have to scan the entire file.
Estimating the Size of the Change Tracking File on Disk
The size of the change tracking file is proportional to the size of the database and the number of enabled threads of redo. The size is not related to the frequency of updates to the database.
Typically, the space required for block change tracking is approximately 1/30,000 the size of the data blocks to be tracked. The following two factors may cause the file to be larger than this estimate suggests:
To avoid the overhead of allocating space as the database grows, the change tracking file starts at 10MB, and new space is allocated in 10MB increments. Thus, for any database up to approximately 300GB the file size is no smaller than 10MB, for up to approximately 600GB no smaller than 20MB, and so on.
For each datafile, a minimum of 320K of space is allocated in the change tracking file, regardless of the size of the file. Thus, if you have a large number of relatively small datafiles, the change tracking file is larger than for a database with a smaller number of larger datafiles containing the same data.
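The 1/30,000 rule combined with the 10MB allocation granularity can be sketched with shell arithmetic. The 300GB figure is just an example, and this deliberately ignores the 320K-per-datafile minimum and additional redo threads:

```shell
# Rough size estimate for the change tracking file
db_gb=300                                   # example: a 300GB database
db_bytes=$((db_gb * 1024 * 1024 * 1024))
est_mb=$((db_bytes / 30000 / 1024 / 1024))  # ~1/30,000 of the data
alloc_mb=$(( ((est_mb + 9) / 10) * 10 ))    # round up to 10MB increments
echo "estimated change tracking file size: ${alloc_mb}MB"
```

Consistent with the text above, a 300GB database lands on the 10MB minimum allocation.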
What happens during RMAN active duplicate cloning in Oracle
In an active duplication, image copies of the target database's datafiles and its archived redo log files are copied over the network through the auxiliary instance's service name, so no backup of the target database is needed. The target database must be in archivelog mode. During the duplication process, RMAN does the following:
1. Generates a unique DBID for the auxiliary database.
2. Copies the datafiles and archived log files from the target database to the auxiliary database.
3. Re-creates the control files for the auxiliary database.
4. Re-creates the online redo log files.
5. Restarts the auxiliary instance.
6. Opens the database with RESETLOGS.
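An active duplication is typically driven by a command like the one below (the connection strings and database name are placeholders, and the exact clauses vary by version):

```
$ rman TARGET sys@target_db AUXILIARY sys@aux_db

RMAN> DUPLICATE TARGET DATABASE TO aux_db
      FROM ACTIVE DATABASE
      SPFILE
      NOFILENAMECHECK;
```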
Where a RAC database stores data
Storage Options for RAC
1--CFS (Cluster File System) – Easy to manage but only available on some platforms. Does not address striping and mirroring.
2--RAW – Available on all platforms but difficult to manage. Does not address striping and mirroring.
3--NFS – Easy to manage but only available on some platforms. Does not address striping and mirroring.
4--ASM (Automatic Storage Management) – Easy to manage, available on ALL platforms, and DOES address striping and mirroring.
CFS (Cluster Filesystems)
The idea of CFS is basically to share filesystems between nodes. Easy to manage, since you are dealing with regular files. CFS is configured on shared storage; each node must have access to the storage on which the CFS is mounted. NOT available on all platforms. Currently supported CFS solutions:
OCFS on Linux and Windows (Oracle)
DBE/AC CFS (Veritas)
GPFS (AIX)
Tru64 CFS (HP Tru64)
Solaris QFS
RAW (character devices)
Hard to manage, since you are dealing with character devices rather than regular files. Adding and resizing datafiles is not trivial. On some operating systems, volume groups need to be deactivated before LVs can be manipulated or added.
NFS (Network Filesystem)
NOT available on all platforms. Currently supported NFS solutions:
Network Appliance
Redhat Linux
Fujitsu Primecluster
Solaris Suncluster
ASM
Stripes files rather than logical volumes.
Enables online disk reconfiguration and dynamic rebalancing.
Provides adjustable rebalancing speed.
Provides redundancy on a file basis.
Supports only Oracle files.
Is cluster aware.
Is automatically installed as part of the base code set.
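As an illustration, creating an ASM disk group where ASM itself handles the mirroring and striping might look like this, run in the ASM instance (the disk paths, failure group names and disk group name are hypothetical):

```sql
-- NORMAL REDUNDANCY gives two-way mirroring across failure groups
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
  FAILGROUP fg2 DISK '/dev/raw/raw3', '/dev/raw/raw4';
```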
RAC BASICS
Volume Manager:
In computer storage, logical volume management or LVM provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes. In particular, a volume manager can concatenate, stripe together or otherwise combine partitions into larger virtual ones that administrators can re-size or move, potentially without interrupting system use.
The Volume Manager builds virtual devices called volumes on top of physical disks. Volumes are accessed by a UNIX file system, a database, or other applications in the same way physical disk partitions would be accessed. Volumes are composed of other virtual objects that can be manipulated to change the volume's configuration. Volumes and their virtual
components are referred to as Volume Manager objects. Volume Manager objects can be manipulated in a variety of ways to optimize performance, provide redundancy of data, and perform backups or other administrative tasks on one or more physical disks without interrupting applications. As a result, data availability and disk subsystem throughput are improved.
Large file systems require the capacity of several disks, but most file systems must be created on a single device. A hardware RAID device is one solution to this problem. A hardware RAID device appears as a single device while in fact containing several disk drives internally. There are other excellent benefits of hardware RAID, but it is an expensive solution if one simply needs to make many small disks look like a single big disk. Volume managers are the software solution to this problem. A volume manager is typically a mid-level block device driver (often called a volume driver) which makes many disks appear as a single logical disk. In addition to existing in the kernel's block I/O path, a volume manager requires user level programs to configure and manage partitions and volumes. The virtualized storage perspective produced by volume managers is so useful that often all storage, including hardware RAID, is controlled with a volume manager.
A physical disk is the underlying storage device (media), which may or may not be under Volume Manager control. A physical disk can be accessed using a device name such as c#b#t#d#, where c# is the controller, b# is the bus, t# is the target ID, and d# is the disk number.
A physical disk can be divided into one or more partitions. The partition number, or s#, is given at the end of the device name
Storage Area Network and SAN Protocols:
Storage Area Network (SAN) is a high-speed network or subnetwork whose primary purpose is to transfer data between computer and storage systems. A storage device is a machine that contains nothing but a disk or disks for storing data. A SAN consists of a communication infrastructure, which provides physical connections; and a management layer, which organizes the connections, storage elements, and computer systems so that data transfer is secure and robust.
Typically, a storage area network is part of the overall network of computing resources for an enterprise. A storage area network is usually clustered in close proximity to other computing resources but may also extend to remote locations for backup and archival storage. SANs support disk mirroring, backup and restore, archival and retrieval of archived data, data migration from one storage device to another, and the sharing of data among
different servers in a network. SANs can incorporate subnetworks with network-attached storage (NAS) systems.
There are a few SAN technologies available in today's implementations, such as IBM's optical fiber ESCON which is enhanced by FICON architecture, or the newer Fibre Channel technology. High speed Ethernet is also used in the storage Area Network for connection. SCSI and iSCSI are popular technologies used in the Storage Area Network.
SAN's architecture works in a way that makes all storage devices available to all servers on a LAN or WAN. As more storage devices are added to a SAN, they too will be accessible from any server in the larger network. A Storage Area Network can be anything from two servers on a network accessing a central pool of storage devices to several thousand servers accessing many millions of megabytes of storage.
iSCSI: Internet Small Computer System Interface:
Internet Small Computer System Interface (iSCSI) is a TCP/IP-based protocol for establishing and managing connections between IP-based storage devices, hosts and clients, forming a Storage Area Network (SAN). The SAN makes it possible to use the SCSI protocol in network infrastructures for high-speed data transfer at the block level between multiple elements of data storage networks.
The architecture of SCSI is based on the client/server model, which is mostly implemented in environments where devices are close to each other and connected with SCSI buses. Encapsulation and reliable delivery of bulk data transactions between initiators and targets through the TCP/IP network is the main function of iSCSI. iSCSI provides a mechanism for encapsulating SCSI commands on an IP network and operates on top of TCP.
For today's SAN (Storage Area Network), the key requirements of data communication are: 1) consolidation of data storage systems, 2) data backup, 3) server clustering, 4) replication, 5) data recovery in emergency conditions. In addition, a SAN is typically distributed geographically over multiple LANs and WANs using various technologies. All operations must be conducted in a secure environment and with QoS. iSCSI is designed to perform the above functions in a TCP/IP network safely and with proper QoS.
The iSCSI has four components:
iSCSI Address and Naming Conventions: An iSCSI node is an identifier of SCSI devices (in a network entity) available through the network. Each iSCSI node has a unique iSCSI name (up to 255 bytes) which is formed according to the rules adopted for Internet nodes.
iSCSI Session Management: The iSCSI session consists of a Login Phase and a Full Feature Phase which is completed with a special command.
iSCSI Error Handling: Because of a high probability of errors in data delivery in some IP networks, especially WAN, where the iSCSI can work, the protocol provides a great deal of measures for handling errors.
iSCSI Security: As the iSCSI can be used in networks where data can be accessed illegally, the protocol allows different security methods.
By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval.
The protocol allows clients, called initiators, to send SCSI commands (CDBs) to SCSI storage devices, called targets, on remote servers (your computer is the initiator because it initiates the connection to the storage device, which is the target). It is a storage area network (SAN) protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts (such as database and web servers) with the illusion of locally attached disks. Unlike traditional Fibre Channel, which requires special-purpose cabling, iSCSI can be run over long distances using existing network infrastructure.
Oracle Clusterware :
Oracle Clusterware is software that enables servers to operate together as if they were one server. Each server looks like any standalone server. However, each server has additional processes that communicate with each other, so the separate servers appear as one server to applications and end users. In addition, Oracle Clusterware enables the protection of any Oracle application, or any other kind of application, within a cluster.
A cluster is a group of independent servers used in a network that cooperate as a single system. Clustering is a technique used to create a highly available and easily scalable environment. Cluster software is the software running on each of these servers that provides
the intelligence, which enables the coordinated cooperation of those servers. If one of the cluster servers fails, the work previously running on that server can be restarted on another available server in the cluster.
Clusterware monitors all components, such as instances and listeners. There are two important components in Oracle Clusterware, the Voting Disk and the OCR (Oracle Cluster Registry). The voting disk and the OCR are created on shared storage during the Oracle Clusterware installation process.
OCR File :- Cluster configuration information is maintained in Oracle Cluster Registry file. OCR relies on a distributed shared-cache architecture for optimizing queries against the cluster repository. Each node in the cluster maintains an in-memory copy of OCR, along with an OCR process that accesses its OCR cache.
When OCR client applications need to update the OCR, they communicate through their local OCR process with the OCR process that performs the input/output (I/O) for writing to the repository on disk.
The OCR client applications are Oracle Universal Installer (OUI), SRVCTL, Enterprise Manger (EM), Database Configuration Assistant (DBCA), Database Upgrade Assistant(DBUA), NetCA and Virtual Internet Protocol Configuration assistant (VIPCA). OCR also maintains dependency and status information for application resources defined within CRS, specifically databases, instances, services and node applications.
Note:- The name of the configuration file is ocr.loc and the configuration file variable is ocrconfig_loc.
Oracle Cluster Registry (OCR) :- resides on shared storage and maintains information about cluster configuration and information about cluster database. OCR contains information like which database instances run on which nodes and which services runs on which database. The OCR also manages information about processes that Oracle Clusterware controls. The OCR stores configuration information in a series of key-value pairs within a directory tree structure. The OCR must reside on shared disk that is accessible by all of the nodes in your cluster. The Oracle Clusterware can multiplex the OCR and Oracle recommends that you use this feature to ensure cluster high availability.
Note:- You can replace a failed OCR online, and you can update the OCR through supported APIs such as Enterprise Manager, the Server Control Utility (SRVCTL), or the Database Configuration Assistant (DBCA).
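The state of the OCR can be inspected from the command line on a cluster node (a sketch; exact output and options differ between Clusterware releases):

```
# verify OCR integrity and show its location, size and free space
$ ocrcheck

# list the automatic OCR backups that CRS maintains
$ ocrconfig -showbackup
```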
Voting Disk: - Manages cluster membership by way of a health check and arbitrates cluster ownership among the instances in case of network failures. RAC uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on shared disk. For high availability, Oracle recommends that you have multiple voting disks. The Oracle Clusterware enables multiple voting disks.
There isn't really any irreplaceable data kept in the voting disk, so if you lose voting disks you can simply add them back without losing any data. But, of course, losing voting disks can lead to node reboots. If you lose all voting disks, you will have to keep the CRS daemons down; only then can you add the voting disks back.
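Voting disks can be listed and added with crsctl (a sketch; the device path is hypothetical, and the syntax, including whether -force is needed while the stack is down, varies between Clusterware releases):

```
# list the configured voting disks
$ crsctl query css votedisk

# add a voting disk while the CRS stack is down
$ crsctl add css votedisk /dev/raw/raw5 -force
```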
Cache Fusion:
Cache Fusion is the diskless cache coherency mechanism in Oracle RAC that ships copies of data blocks directly from the memory cache of the instance that holds a block to the instance that requests it. Through the interconnect, Cache Fusion effectively provides a single buffer cache for all instances in the cluster.
In a single-node Oracle database, an instance looking for a data block first checks its cache; if the block is not in the cache, it reads the block from disk into the cache and returns it to the client.
In a RAC database there are also remote caches, so an instance looks not only in its local cache but also in the caches of the remote instances. If the block is in the local cache, it is returned from there; if not, instead of going to disk the instance first checks (via the interconnect) whether the block is available in a remote cache.
This is because accessing a data block from a remote instance's cache is faster than reading it from disk.
Heart Beat:
A heartbeat is a polling mechanism, similar to a ping, that monitors the availability of the other servers in a RAC system. The heartbeat is sent over the cluster interconnect to ensure that all RAC nodes are available.
The heartbeat is part of clusterware node monitoring. When a node does not respond to a heartbeat signal, the instance is assumed to have crashed and the node is evicted (expelled) from the cluster.
Public IP, Private IP, Virtual IP and DNS SERVER:
Public IP: The public IP address must be resolvable to the hostname. You can register both the public IP and the VIP address with the DNS. If you do not have a DNS, you must make sure that both public IP addresses are in the /etc/hosts file of every cluster node.
Private IP: A private IP address for each node serves as the private interconnect address for internode cluster communication only. Oracle RAC requires "private IP" addresses to manage the CRS, the clusterware heartbeat process and the cache fusion layer
Virtual IP: A public internet protocol (IP) address for each node, to be used as the Virtual IP address (VIP) for client connections. If a node fails, Oracle Clusterware fails the VIP address over to an available node. This address should be in the /etc/hosts file on every node. The VIP should not be in use at the time of installation, because it is an IP address that Oracle Clusterware manages. Oracle uses the VIP for database access. The VIP must be on the same subnet as the public IP address and is used for RAC failover (TAF).
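A minimal two-node /etc/hosts following this scheme might look like the following (all hostnames and addresses are invented for illustration):

```
# public
192.168.1.11   rac1
192.168.1.12   rac2
# private interconnect
10.0.0.11      rac1-priv
10.0.0.12      rac2-priv
# virtual IPs (same subnet as the public addresses)
192.168.1.21   rac1-vip
192.168.1.22   rac2-vip
```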
DNS SERVER: The Domain Name System (DNS) is a standard technology for managing the names of Web sites and other Internet domains. DNS allows you to type a name like redshoretech.com into your Web browser and have your computer automatically find that address on the Internet. A key element of the DNS is a worldwide collection of DNS servers.
A DNS server is any computer registered to join the Domain Name System. A DNS server runs special-purpose networking software, features a public IP address, and contains a database of network names and addresses for other Internet hosts.
LUN (LOGICAL UNIT NUMBER):
Suppose we have a large storage array and the requirement is not to let one server use all of the storage; the array needs to be divided into logical units, called LUNs (Logical Unit Numbers). A LUN lets us slice a storage array into usable storage chunks and present them to a server. A LUN refers to either an entire physical volume or a subset of a larger physical disk or volume; it represents a logical abstraction, a virtual layer between the physical disk and the application. A LUN is a SCSI concept.
Most storage devices use the SCSI command set to communicate; in simple words, devices connected via a SCSI parallel bus are controlled with SCSI commands.
On a SCSI parallel bus, the LUN is used to electrically address the devices; multiple devices can appear on a single connection because of LUNs. So, finally, for a system administrator a LUN is a uniquely identifiable storage device.
Overview of Inventory
The inventory is a very important part of the Oracle Universal Installer. This is where OUI keeps all information regarding the products installed on a specific machine.
There are two types of inventories:
1)Global or Central Inventory:- The Global Inventory records the physical location of Oracle products installed on the machine, such as ORACLE_HOMES (RDBMS and IAS) or JRE. It does not have any information about the detail of patches applied to each ORACLE_HOMEs.
The Global Inventory gets updated every time you install or de-install an ORACLE_HOME on the machine, be it through OUI Installer, Rapid Install, or Rapid Clone.
Note: If you need to delete an ORACLE_HOME, you should always do it through the OUI de-installer in order to keep the Global Inventory synchronized.
If the Global Inventory is lost or corrupted, we can regenerate it using the runInstaller tool.
There is only one Global Inventory per machine. Its location is defined by the pointer file.
Pointer File:- The pointer file is used to locate the Global Inventory. The pointer file is /etc/oraInst.loc; by looking at its contents we can find the location of the Global Inventory.
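The contents of the pointer file typically look like this (the inventory path is an example; yours will differ):

```
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
```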
2)Local Inventory:- The local inventory contains information about the installed software specific to a single ORACLE_HOME. There is one local inventory per ORACLE_HOME. It is physically located inside the ORACLE_HOME at $ORACLE_HOME/inventory and contains the detail of the patch level for that ORACLE_HOME.
The Local Inventory gets updated whenever a patch is applied to the ORACLE_HOME, using OUI.
If the Local Inventory becomes corrupt or is lost, it is very difficult to recover; you may end up having to reinstall the ORACLE_HOME and re-apply all patch sets and patches.
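The pointer-file lookup described above can be sketched in shell. This is a hypothetical example: the sample oraInst.loc contents and the /u01/app/oraInventory path are assumptions, and the sketch writes them to a temp file so it is self-contained; on a real host you would read /etc/oraInst.loc directly.

```shell
# Locate the Central Inventory by reading the pointer file, the same
# way the OUI does. The pointer file contents below are an assumed
# example written to a temp path.
ptr=$(mktemp)
cat > "$ptr" <<'EOF'
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
EOF

# Extract the inventory_loc value from the pointer file.
inv_loc=$(grep '^inventory_loc=' "$ptr" | cut -d= -f2)
echo "Central Inventory is at: $inv_loc"
rm -f "$ptr"
```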
11G RMAN BACKUP BASED CLONING
Source(TARGET) instance information:
Instance name=DLINK
hostname=mydbt4db2
datafile location=/u03/oradata/DLINK, /u02/oradata/DLINK
Redolog file location=/u03/oradata/DLINK
Archive log file location=/u02/oradata/arch
Auxiliary instance (RLINK) information:
Instance name=RLINK
hostname=mydbt4db4
datafile location=/u03/oradata/RLINK, /u02/oradata/RLINK
Redolog file location=/u03/oradata/RLINK
Archive log file location=/u02/oradata/arch_RLINK
1) Create required directory structure at auxiliary side
[oracle@mydbt4db4 admin]$ mkdir -p /u02/oradata/RLINK
[oracle@mydbt4db4 admin]$ mkdir -p /u03/oradata/RLINK
[oracle@mydbt4db4 admin]$ mkdir -p /u02/oradata/arch_RLINK
2) Define ORACLE_SID and other required settings for auxiliary instance
[oracle@mydbt4db4 ~]$ vi .bash_profile
[oracle@mydbt4db4 ~]$ pwd
/home/oracle
[oracle@mydbt4db4 ~]$ . .bash_profile
[oracle@mydbt4db4 ~]$ echo $ORACLE_SID
RLINK
[oracle@mydbt4db4 ~]$
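The entries added to .bash_profile in this step might look like the following sketch. The ORACLE_HOME path is an assumption based on the paths used later in this article; adjust it to your installation.

```shell
# Hypothetical environment settings for the auxiliary instance RLINK,
# as would be appended to ~/.bash_profile in step 2.
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1   # assumed path
export ORACLE_SID=RLINK
export PATH=$ORACLE_HOME/bin:$PATH
echo "$ORACLE_SID"
```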
3) Creating initialization Parameter file for the Auxiliary instance(RLINK)
[oracle@mydbt4db4 dbs]$ vi initRLINK.ora
[oracle@mydbt4db4 dbs]$ cat initRLINK.ora
db_name='RLINK'
memory_target=200M
db_block_size=8192
control_files ='/u02/oradata/RLINK/control1.ctl','/u03/oradata/RLINK/control2.ctl'
compatible ='11.2.0'
log_archive_dest=/u02/oradata/arch_RLINK
log_file_name_convert='/u03/oradata/DLINK','/u03/oradata/RLINK'
db_file_name_convert='/u03/oradata/DLINK','/u03/oradata/RLINK','/u02/oradata/DLINK','/u02/oradata/RLINK'
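db_file_name_convert works as ordered from/to string pairs: each datafile path is rewritten by substituting the first matching prefix. The substitution can be sketched with sed (the datafile name below is a made-up example):

```shell
# Emulate db_file_name_convert's prefix substitution for one datafile,
# using the two DLINK -> RLINK pairs defined in the parameter file above.
src='/u03/oradata/DLINK/system01.dbf'   # hypothetical source datafile path
dst=$(echo "$src" | sed -e 's|^/u03/oradata/DLINK|/u03/oradata/RLINK|' \
                        -e 's|^/u02/oradata/DLINK|/u02/oradata/RLINK|')
echo "$dst"
```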
4) Create password file at auxiliary side (RLINK)
[oracle@mydbt4db4 ~]$ cd $ORACLE_HOME/dbs
[oracle@mydbt4db4 dbs]$ orapwd file=orapw$ORACLE_SID password=sys force=y
[oracle@mydbt4db4 dbs]$ ls -ltr orapwR*
-rw-r----- 1 oracle oinstall 1536 Nov 26 16:55 orapwRLINK
[oracle@mydbt4db4 dbs]$
5) Take a backup of the database Source
RMAN> backup database plus archivelog;
Starting backup at 26-NOV-12
current log archived
configuration for DISK channel 2 is ignored
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=133 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
.
.
.
.
Starting Control File and SPFILE Autobackup at 26-NOV-12
piece handle=/u02/oradata/rman_bkp/c-2903807241-20121126-00 comment=NONE
Finished Control File and SPFILE Autobackup at 26-NOV-12
6) Prepare for duplicate by starting the auxiliary instance
[oracle@mydbt4db4 rman_bkp]$ sqlplus '/as sysdba'
SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 26 08:41:59 2012
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount;
.
7) Copy the backupsets from source to your auxiliary server
If the duplicate is to be created on a different server, move the backup pieces to that server using a command such as ftp or scp.
[oracle@mydbt4db2 rman_bkp]$ scp * [email protected]:/u02/oradata/RMAN_BKP
8) Connect to the auxiliary instance from RMAN and perform the rman duplicate as follows
rman auxiliary /
Recovery Manager: Release 11.2.0.1.0 - Production on Mon Nov 26 20:41:32 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to auxiliary database: RLINK (not mounted)
RMAN> DUPLICATE DATABASE TO 'RLINK'
2> BACKUP LOCATION '/u02/oradata/RMAN_BKP';
Starting Duplicate Db at 26-NOV-12
contents of Memory Script:
{
sql clone "create spfile from memory";
}
.
.
.
.
.
executing Memory Script
contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script
database opened
Finished Duplicate Db at 26-NOV-12
Active duplication cloning using RMAN in 11G
You can create a duplicate database using the RMAN duplicate command. The duplicate database has a different DBID from the source database and functions entirely independently. Starting from 11g you can do duplicate database in 2 ways.
1. Active database duplication
2. Backup-based duplication
Active database duplication copies the live target database over the network to the auxiliary destination and then creates the duplicate database there; the difference is that no pre-existing RMAN backups or copies are needed. The duplication work is performed by an auxiliary channel, which corresponds to a server session on the auxiliary instance on the auxiliary host.
As part of the duplicating operation, RMAN automates the following steps:
1. Creates a control file for the duplicate database
2. Restarts the auxiliary instance and mounts the duplicate control file
3. Creates the duplicate datafiles and recovers them with incremental backups and archived redo logs
4. Opens the duplicate database with the RESETLOGS option
For active database duplication, RMAN performs one extra step: it copies the target database datafiles over the network to the auxiliary instance.
Source(TARGET) instance information:
Instance name=DLINK
hostname=mydbt4db2
datafile location=/u03/oradata/DLINK, /u02/oradata/DLINK
Redolog file location=/u03/oradata/DLINK
Archive log file location=/u02/oradata/arch
Auxiliary instance information:
Instance name=TLINK
hostname=mydbt4db4
datafile location=/u03/oradata/TLINK, /u02/oradata/TLINK
Redolog file location=/u03/oradata/TLINK
Archive log file location=/u02/oradata/arch_TLINK
Note: Before proceeding with active duplication, RMAN should be fully configured on the source side, with or without a catalog database.
1) Define ORACLE_SID and other required settings for the auxiliary instance
[oracle@mydbt4db4 ~]$ vi .bash_profile
[oracle@mydbt4db4 ~]$ . .bash_profile
[oracle@mydbt4db4 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@mydbt4db4 ~]$ echo $ORACLE_SID
TLINK
[oracle@mydbt4db4 ~]$ echo $TNS_ADMIN
/u01/app/oracle/product/11.2.0/dbhome_1/network/admin
[oracle@mydbt4db4 ~]$
1.b)Create required directory structure at auxiliary side
[oracle@mydbt4db4 admin]$ mkdir -p /u02/oradata/TLINK
[oracle@mydbt4db4 admin]$ mkdir -p /u03/oradata/TLINK
[oracle@mydbt4db4 admin]$ mkdir -p /u02/oradata/arch_TLINK
2) Creating initialization Parameter file for the Auxiliary instance(TLINK)
[oracle@mydbt4db4 dbs]$ cd $ORACLE_HOME/dbs
[oracle@mydbt4db4 dbs]$ vi initTLINK.ora
[oracle@mydbt4db4 dbs]$ more initTLINK.ora
db_name='TLINK'
memory_target=200M
db_block_size=8192
control_files ='/u02/oradata/TLINK/control1.ctl','/u03/oradata/TLINK/control2.ctl'
compatible ='11.2.0'
log_archive_dest=/u02/oradata/arch_TLINK
log_file_name_convert='/u03/oradata/DLINK','/u03/oradata/TLINK'
db_file_name_convert='/u03/oradata/DLINK','/u03/oradata/TLINK','/u02/oradata/DLINK','/u02/oradata/TLINK'
3) Create password file at auxiliary side(TLINK)
[oracle@mydbt4db4 ~]$ cd $ORACLE_HOME/dbs
[oracle@mydbt4db4 dbs]$ orapwd file=orapw$ORACLE_SID password=sys force=y
[oracle@mydbt4db4 dbs]$ ls -ltr orapw*
-rw-r----- 1 oracle oinstall 1536 Nov 26 13:44 orapwTLINK
4) Start auxiliary instance
[oracle@mydbt4db4 dbs]$ sqlplus '/as sysdba'
SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 26 13:50:00 2012
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount;
ORACLE instance started.
5) Configure listener at auxiliary side
[oracle@mydbt4db4 dbs]$ cd $TNS_ADMIN
[oracle@mydbt4db4 admin]$ vi listener.ora
[oracle@mydbt4db4 admin]$ more listener.ora
TLINK =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = mydbt4db4.redLINKtech.com)(PORT = 1526))
)
)
SID_LIST_TLINK =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)
(SID_NAME = TLINK)
)
)
[oracle@mydbt4db4 admin]$ lsnrctl start TLINK
6) Configure listener at TARGET(source) side
[oracle@mydbt4db2 admin]$ cd $TNS_ADMIN
[oracle@mydbt4db2 admin]$ vi listener.ora
[oracle@mydbt4db2 admin]$ more listener.ora
DLINK =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = mydbt4db2.redLINKtech.com)(PORT = 1525))
)
)
SID_LIST_DLINK =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)
(SID_NAME = DLINK)
)
)
[oracle@mydbt4db2 admin]$ lsnrctl start DLINK
7)Create password file at TARGET(DLINK) side
[oracle@mydbt4db2 admin]$ cd $ORACLE_HOME/dbs
[oracle@mydbt4db2 dbs]$ orapwd file=orapw$ORACLE_SID password=sys force=y
[oracle@mydbt4db2 dbs]$ ls -ltr orapw*
-rw-r----- 1 oracle oinstall 1536 Nov 26 02:29 orapwDLINK
[oracle@mydbt4db2 dbs]$
8) Configure tns alias at auxiliary(TLINK) to connect TARGET database(DLINK).
[oracle@mydbt4db4 admin]$ cd $TNS_ADMIN
[oracle@mydbt4db4 admin]$ vi tnsnames.ora
[oracle@mydbt4db4 admin]$ more tnsnames.ora
to_dLINK=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=mydbt4db2.redLINKtech.com)(PORT=1525))
(CONNECT_DATA=
(SID=DLINK)
)
)
to_tLINK=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=mydbt4db4.redLINKtech.com)(PORT=1526))
(CONNECT_DATA=
(SID=TLINK)
)
)
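The two aliases above follow the same template, differing only in host, port and SID, so an entry can be generated from those three values. A small sketch (the make_tns_alias function name is mine, not an Oracle tool):

```shell
# Generate a tnsnames.ora entry from an alias name, host, port and SID,
# following the template used for to_dLINK and to_tLINK above.
make_tns_alias() {
  local alias=$1 host=$2 port=$3 sid=$4
  printf '%s=\n (DESCRIPTION=\n  (ADDRESS=(PROTOCOL=tcp)(HOST=%s)(PORT=%s))\n  (CONNECT_DATA=\n   (SID=%s)\n  )\n )\n' \
    "$alias" "$host" "$port" "$sid"
}

make_tns_alias to_dLINK mydbt4db2.redLINKtech.com 1525 DLINK
```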
[oracle@mydbt4db4 admin]$ tnsping to_dLINK
TNS Ping Utility for Linux: Version 11.2.0.1.0 - Production on 26-NOV-2012 14:15:40
Copyright (c) 1997, 2009, Oracle. All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=mydbt4db2.redLINKtech.com)(PORT=1525)) (CONNECT_DATA= (SID=DLINK)))
OK (70 msec)
[oracle@mydbt4db4 admin]$ tnsping to_tLINK
TNS Ping Utility for Linux: Version 11.2.0.1.0 - Production on 26-NOV-2012 14:15:51
Copyright (c) 1997, 2009, Oracle. All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=mydbt4db4.redLINKtech.com)(PORT=1526)) (CONNECT_DATA= (SID=TLINK)))
OK (50 msec)
9) Configure tns alias at Target(DLINK) to connect auxiliary instance (TLINK).
[oracle@mydbt4db2 admin]$ cd $TNS_ADMIN
[oracle@mydbt4db2 admin]$
[oracle@mydbt4db2 admin]$ vi tnsnames.ora
[oracle@mydbt4db2 admin]$ more tnsnames.ora
to_tLINK=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=mydbt4db4.redLINKtech.com)(PORT=1526))
(CONNECT_DATA=
(SID=TLINK)
)
)
[oracle@mydbt4db2 admin]$ tnsping to_tLINK
TNS Ping Utility for Linux: Version 11.2.0.1.0 - Production on 26-NOV-2012 02:27:09
Copyright (c) 1997, 2009, Oracle. All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=mydbt4db4.redLINKtech.com)(PORT=1526)) (CONNECT_DATA= (SID=TLINK)))
OK (180 msec)
[oracle@mydbt4db2 admin]$
10. Start RMAN and Connect to the Database Instances
Start RMAN and connect to the source database as TARGET, the duplicate database instance as AUXILIARY, and, if applicable, the recovery catalog database. You can start the RMAN client on any host so long as it can connect to all of the database instances. If the auxiliary instance requires a text-based initialization parameter file, then this file must exist on the same host that runs the RMAN client application.
In this bulletin we perform the duplicate from the auxiliary server. Look at the example:
[oracle@mydbt4db4 admin]$ rman
Recovery Manager: Release 11.2.0.1.0 - Production on Mon Nov 26 14:53:24 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target sys/sys@to_dLINK
connected to target database: DLINK (DBID=2903807241)
RMAN> connect auxiliary sys/sys@to_tLINK
connected to auxiliary database: TLINK (not mounted)
11) Run the DUPLICATE database command
The simplest case is to use active database duplication to duplicate the database to a different host using a different directory structure. Look at the example:
RMAN> DUPLICATE TARGET DATABASE
2> to 'TLINK'
3> FROM ACTIVE DATABASE;
Starting Duplicate Db at 26-NOV-12
using target database control file instead of recovery catalog
configuration for DISK channel 2 is ignored
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=96 device type=DISK
contents of Memory Script:
{
sql clone "create spfile from memory";
}
executing Memory Script
.
.
.
.
contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script
database opened
Finished Duplicate Db at 26-NOV-12
RMAN>
TRANSPORTABLE TABLESPACES in oracle 11G
Source machine;
We will transport the tablespace using the exp and imp utilities, so the database must be up and running.
SQL> startup;
ORACLE instance started.
Database opened.
SQL> create tablespace tts datafile '/u02/oradata/rmanprod/tts.dbf' size 50M;
Tablespace created.
SQL> create user ttsuser identified by ttsuser default tablespace tts;
User created.
SQL> grant connect,resource to ttsuser;
Grant succeeded.
SQL> conn ttsuser/ttsuser
Connected.
SQL> create table a(a number);
Table created.
SQL> insert into a values(1);
1 row created.
SQL> commit;
SQL> exec dbms_tts.TRANSPORT_SET_CHECK('TTS');
BEGIN dbms_tts.TRANSPORT_SET_CHECK('TTS'); END;
*
ERROR at line 1:
ORA-25153: Temporary Tablespace is Empty
ORA-06512: at line 1
Note: If you get the ORA-25153 error, you need to assign a default temporary tablespace to the database.
SQL> alter database default temporary tablespace tempts;
SQL> exec dbms_tts.TRANSPORT_SET_CHECK('TTS');
PL/SQL procedure successfully completed.
SQL> select * from transport_set_violations;
no rows selected
If you get any output other than 'no rows selected', the tablespace cannot be exported; go through that output and take appropriate action.
SQL> alter tablespace tts read only;
Tablespace altered.
SQL> exit
[oracle@vmrshoret4db5 ~]$ exp file=tts_exp.dmp log=tts_exp.log TRANSPORT_TABLESPACE=Y TABLESPACES='TTS'
Export terminated successfully without warnings.
Now copy the dump file and datafile to the destination using scp.
[oracle@vmrshoret4db5 ~]$ scp tts_exp.dmp [email protected]:/$HOME
[oracle@vmrshoret4db5 rmanprod]$ scp tts.dbf [email protected]:/u03/oradata/mls/
Now put the tablespace back in read write mode.
SQL> alter tablespace tts read write;
Tablespace altered.
On the destination:
SQL> startup;
ORACLE instance started.
SQL> create user ttsuser identified by ttsuser;
User created.
[oracle@vmrshoret4db3 ~]$ imp file=tts_exp.dmp log=tts.log datafiles='/u03/oradata/mls/tts.dbf' TRANSPORT_TABLESPACE=Y TABLESPACES=TTS
Import terminated successfully without warnings.
SQL> select file_name from dba_data_files;
FILE_NAME
-----------------------------------------------------------
/u03/oradata/mls/system01.dbf
/u03/oradata/mls/sysaux01.dbf
/u03/oradata/mls/undotbs01.dbf
/u03/oradata/mls/tempts1.dbf
/u03/oradata/mls/tts.dbf
SQL> select plugged_in,tablespace_name from dba_tablespaces;
PLU TABLESPACE_NAME
--- ------------------------------
NO SYSTEM
NO SYSAUX
NO UNDOTBS1
NO TEMPTS1
YES TTS
Note: If the PLUGGED_IN value is YES, that tablespace is a transported tablespace.
SQL> conn ttsuser/ttsuser;
Connected.
SQL> select * from a;
AUTOMATIC STORAGE MANAGEMENT (ASM)
Automatic Storage Management (ASM) is a way to administer Oracle-related files by referencing disks directly as raw devices or through the ASMLib software. This article presents the setup details for using either raw devices or ASMLib.
The ASM functionality is an extension of the Oracle Managed Files (OMF) functionality. It adds striping and mirroring to provide storage with a high level of redundancy. To use ASM we need to configure an ASM instance (which is not a full database instance, just the memory structures).
In summary ASM provides the following functionality:
* Manages groups of disks, called disk groups.
* Manages disk redundancy within a disk group.
* Provides near-optimal I/O balancing without any manual tuning.
* Enables management of database objects without specifying mount points and filenames.
* Supports large files.
The ASM instance shares the Oracle home with the database instance. If you plan on running multiple database instances on the server the ASM instance should be installed in a separate Oracle home.
ASM can be configured using two methods of creating disks:
a) Using ASMLib
b) Using raw devices
Before we proceed with the installation and configuration of ASM, it is worth skimming through the material below to familiarize yourself with it:
1) DISK PARTITIONING:
A) To check the partitions available on your machine, use the command fdisk -l
B) To partition the currently available disk, follow the steps below:
# fdisk /dev/sda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
#
C) To delete an existing partition, run fdisk /dev/sda and then choose option "d".
2) RAW DEVICES:
If you use raw devices, it is important that you register them in the /etc/sysconfig/rawdevices file and then start the rawdevices service.
Edit the /etc/sysconfig/rawdevices file, adding the following lines.
/dev/raw/raw1 /dev/sda1
/dev/raw/raw2 /dev/sda2
/dev/raw/raw3 /dev/sda3
NOTE: Ensure that df -h does not show these devices. If they are mounted, they cannot be used for an ASM instance.
Once you have edited the /etc/sysconfig/rawdevices file, you can start/enable/stop/disable the raw devices using the following command:
service rawdevices restart
3) ASM Logs:
The log file to monitor for issues during ASM creation is /var/log/oracleasm (oracleasm is the file name).
Now that we are familiar with the commands required while creating an ASM instance, let us proceed with the ASM instance creation.
PARTITION DISKS
Both ASMLib and raw devices require the candidate disks to be partitioned before they can be accessed.
Example:
# ls sd*
sda sda1 sda2 sdb sdc sdd
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
#
The remaining disks ("/dev/sdc" and "/dev/sdd") must be partitioned in the same way.
NOTE: After partitioning, either use ASMLib to create the disks, or create raw devices and assign them as ASM disks later using DBCA.
ASMLIB INSTALLATION
This step is only necessary if you want to use ASMLib to access the ASM disks.
Determine your kernel version using the following command as the root user.
# uname -r
2.6.9-34.ELsmp
#
Download the ASMLib software from the OTN website, making sure you pick the version that matches your distribution, kernel and architecture. For this example I used CentOS 4.3, so the following packages were required:
• oracleasm-support-2.0.1-1.i386.rpm
• oracleasmlib-2.0.1-1.i386.rpm
• oracleasm-2.6.9-34.ELsmp-2.0.1-1.i686.rpm
Install the packages as the root user:
# rpm -Uvh oracleasm-support-2.0.1-1.i386.rpm
# rpm -Uvh oracleasmlib-2.0.1-1.i386.rpm
# rpm -Uvh oracleasm-2.6.9-34.ELsmp-2.0.1-1.i686.rpm
#To check what packages/drivers are installed on your system, use the below command:
# rpm -qa | grep -i oracleasm
With the software installed, configure the ASM kernel module.
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
#
Once the kernel module is loaded, stamp (or label) the partitions created earlier as ASM disks.
# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
Marking disk "/dev/sdb1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk VOL2 /dev/sdc1
Marking disk "/dev/sdc1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk VOL3 /dev/sdd1
Marking disk "/dev/sdd1" as an ASM disk: [ OK ]
#
If this were a RAC installation, the disks would only be stamped by one node. The other nodes would just scan for the disks.
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
#
The stamped disks are listed as follows.
# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
#
The disks are now ready to be used by ASM.
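The stamp-then-scan workflow above pairs each volume label with a device partition. The bookkeeping can be sketched in shell; the label-to-device pairs below follow the example in this article and would differ on a real system.

```shell
# Build the oracleasm createdisk commands for the VOL1..VOL3 labels
# and their partitions, mirroring the stamping session shown above.
cmds=$(for pair in "VOL1 /dev/sdb1" "VOL2 /dev/sdc1" "VOL3 /dev/sdd1"; do
  set -- $pair                      # split the pair into label and device
  echo "/etc/init.d/oracleasm createdisk $1 $2"
done)
echo "$cmds"
```

The sketch only prints the commands rather than running them, since stamping disks requires root and real devices.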
To administer the Automatic Storage Management library driver and disks, use the oracleasm initialization script with different options, as follows:
# /etc/init.d/oracleasm configure
# /etc/init.d/oracleasm enable
# /etc/init.d/oracleasm restart
# /etc/init.d/oracleasm createdisk DISKNAME devicename
# /etc/init.d/oracleasm deletedisk DISKNAME
Caution: Do not use this command to unmark disks that are being used by an Automatic Storage Management disk group. You must drop the disk from the Automatic Storage Management disk group before you unmark it.
querydisk
Use the querydisk option to determine whether a disk device or disk name is being used by the Automatic Storage Management library driver:
# /etc/init.d/oracleasm querydisk {DISKNAME | devicename}
listdisks
Use the listdisks option to list the disk names of marked Automatic Storage Management library driver disks:
# /etc/init.d/oracleasm listdisks
scandisks
Use the scandisks option to enable cluster nodes to identify which shared disks have been marked as Automatic Storage Management library driver disks on another node:
# /etc/init.d/oracleasm scandisks
Raw Device Setup
This step is only necessary if you want ASM to access the disks as raw devices.
Edit the /etc/sysconfig/rawdevices file, adding the following lines.
/dev/raw/raw1 /dev/sda1
/dev/raw/raw2 /dev/sdb1
/dev/raw/raw3 /dev/sdc1
Restart the rawdevices service using the following command.
[root@localhost ~]# service rawdevices restart
Assigning devices:
/dev/raw/raw1 --> /dev/sda1
/dev/raw/raw1: bound to major 8, minor 3
/dev/raw/raw2 --> /dev/sdb1
/dev/raw/raw2: bound to major 8, minor 7
/dev/raw/raw3 --> /dev/sdc1
/dev/raw/raw3: bound to major 8, minor 8
done
[root@localhost ~]#
Run the following commands and add them to the /etc/rc.local file.
chown oracle:oinstall /dev/raw/raw1
chown oracle:oinstall /dev/raw/raw2
chown oracle:oinstall /dev/raw/raw3
chmod 600 /dev/raw/raw1
chmod 600 /dev/raw/raw2
chmod 600 /dev/raw/raw3
The ASM raw device disks are now configured.
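The six chown/chmod commands above follow one pattern per raw device, so the /etc/rc.local additions can be generated with a loop. A sketch that only prints the commands (changing ownership of /dev/raw requires root, so the sketch does not execute them):

```shell
# Generate the per-device ownership and permission commands for the
# three raw devices, matching the chown/chmod lines listed above.
cmds=$(for n in 1 2 3; do
  echo "chown oracle:oinstall /dev/raw/raw$n"
  echo "chmod 600 /dev/raw/raw$n"
done)
echo "$cmds"
```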
ASM CREATION:
Creation of the ASM instance is the same, regardless of the use of ASMLib or raw devices. When using ASMLib, the candidate disks are listed using the stamp associated with them, while the raw devices are listed using their device name.
To configure an ASM instance, start the Database Configuration Assistant by issuing the “./runInstaller” command as the oracle user. On the “Welcome” screen, click the “Next” button.
Select the “Advanced Installation”, then click the “Next” Button.
Select the “Configure Automatic Storage Management” option, then click the “Next” Button.
On the next page, when using raw devices, the candidate disks are listed using their device names. Check the required devices and choose the redundancy (high, normal or external) as per your requirement.
Click on "Change Disk Discovery Path" and search with the string /dev/raw/* (Linux only).
On the “ASM Disk Groups” screen. Click the “Finish” button.
Click the “Yes” button to perform another operation.
You are now ready to create a database instance using ASM.
ASM Instance creation:
After you configure the kernel parameters and the Bash profile, perform the following.
$ export ORACLE_SID=+ASM
$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Sat Jul 6 14:01:06 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> alter system register;
System altered.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
$
DATABASE CREATION:
Go back to the DBCA and create a custom database in the normal way, selecting the “Automatic Storage Management (ASM)” storage option.
· Enter the ASM password if prompted, then click the “OK” button.
· Select the “DATA” disk group, then clicking the “Next” button.
· Accept the default “Oracle-Managed Files” database location by clicking the “Next” button.
· Enable the “Flash Recovery Area” and Archiving, using the “+DATA” disk group for both.
Continue with the rest of the DBCA, selecting the required options along the way.
To create an ASM instance manually, first create a file called init+ASM.ora in the /tmp directory containing the following information.
INSTANCE_TYPE=ASM
Next, using SQL*Plus, connect to the idle instance.
export ORACLE_SID=+ASM
sqlplus / as sysdba
Create an spfile using the contents of the init+ASM.ora file.
SQL> CREATE SPFILE FROM PFILE='/tmp/init+ASM.ora';
File created.
Finally, start the instance with the NOMOUNT option.
SQL> startup nomount
ASM instance started
Total System Global Area 125829120 bytes
Fixed Size 1301456 bytes
Variable Size 124527664 bytes
Database Buffers 0 bytes
Redo Buffers 0 bytes
SQL>
The ASM instance is now ready to use for creating and mounting disk groups. To shutdown the ASM instance issue the following command.
SQL> shutdown
ASM instance shutdown
SQL>
Once an ASM instance is present, disk groups can be referenced by the following parameters in database instances (INSTANCE_TYPE=RDBMS) to allow ASM file creation:
* DB_CREATE_FILE_DEST
* DB_CREATE_ONLINE_LOG_DEST_n
* DB_RECOVERY_FILE_DEST
* CONTROL_FILES
* LOG_ARCHIVE_DEST_n
* LOG_ARCHIVE_DEST
* STANDBY_ARCHIVE_DEST
What to do if my Global Inventory is corrupted ?
No need to worry if your Global Inventory is corrupted: you can recreate the Global Inventory on the machine using the Universal Installer and attach an already-installed Oracle home with the -attachHome option:
./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc \
ORACLE_HOME="Oracle_Home_Location" ORACLE_HOME_NAME="Oracle_Home_Name" \
CLUSTER_NODES="{}"
Basics of PL/SQL required for a DBA
1. What is PL/SQL?
PL/SQL is a procedural language that has both interactive SQL and procedural programming language constructs such as iteration and conditional branching.
2. What is the basic structure of PL/SQL?
PL/SQL uses the block structure as its basic structure. Anonymous blocks or nested blocks can be used in PL/SQL.
3. What are the components of a PL/SQL block?
A set of related declarations and procedural statements is called a block.
4. What are the parts of a PL/SQL block?
Declarative part, executable part and exception part.
5. What are the datatypes available in PL/SQL?
Scalar data types such as NUMBER, VARCHAR2, DATE, CHAR, LONG, BOOLEAN, and composite data types such as RECORD and TABLE.
6. What are %TYPE and %ROWTYPE? What are the advantages of using these over datatypes?
%TYPE provides the data type of a variable or a database column to that variable.
%ROWTYPE provides the record type that represents an entire row of a table or view, or the columns selected in the cursor.
The advantages are:
i. You need not know the variable's data type.
ii. If the database definition of a column in a table changes, the data type of the variable changes accordingly.
7. What is the difference between %ROWTYPE and TYPE RECORD?
%ROWTYPE is to be used whenever the query returns an entire row of a table or view.
TYPE rec RECORD is to be used whenever the query returns columns of different tables or views and variables.
E.g. TYPE r_emp is RECORD (eno emp.empno%type, ename emp.ename%type);
e_rec emp%ROWTYPE
cursor c1 is select empno, deptno from emp;
e_rec c1%ROWTYPE
8. What is a PL/SQL table?
Objects of type TABLE are called "PL/SQL tables", which are modelled as (but not the same as) database tables. PL/SQL tables can have one column and a primary key.
9. What is a cursor? Why is a cursor required?
A cursor is a named private SQL area from where information can be accessed. Cursors are required to process rows individually for queries returning multiple rows.
10. Explain the two types of cursors.
There are two types of cursors: implicit cursors and explicit cursors. PL/SQL uses implicit cursors for its queries. User-defined cursors are called explicit cursors; they can be declared and used.
11. What are the PL/SQL statements used in cursor processing?
DECLARE CURSOR cursor_name, OPEN cursor_name, FETCH cursor_name INTO <variables or record types>, CLOSE cursor_name.
12. What are the cursor attributes used in PL/SQL?
%ISOPEN - to check whether the cursor is open or not.
%ROWCOUNT - number of rows fetched/updated/deleted.
%FOUND - to check whether the cursor has fetched any row; true if rows are fetched.
%NOTFOUND - to check whether the cursor has fetched any row; true if no rows are fetched.
These attributes are preceded with SQL for implicit cursors and with the cursor name for explicit cursors.
13. What is a cursor for loop?
A cursor for loop implicitly declares %ROWTYPE as the loop index, opens the cursor, fetches rows of values from the active set into fields in the record, and closes the cursor when all the records have been processed.
e.g.
FOR emp_rec IN c1 LOOP
salary_total := salary_total + emp_rec.sal;
END LOOP;
14. What will happen after the COMMIT statement?
Cursor c1 is
select empno, ename from emp;
Begin
open c1;
loop
Fetch c1 into eno, ename;
Exit When c1%notfound;
-----
commit;
end loop;
end;
A cursor whose query is SELECT ... FOR UPDATE gets closed after COMMIT/ROLLBACK.
A cursor whose query is a plain SELECT does not get closed even after COMMIT/ROLLBACK.
15. Explain the usage of the WHERE CURRENT OF clause in cursors.
The WHERE CURRENT OF clause in an UPDATE or DELETE statement refers to the latest row fetched from a cursor.
Database Triggers
16. What is a database trigger? Name some usages of database triggers.
A database trigger is a stored PL/SQL program unit associated with a specific database table. Usages: audit data modifications, log events transparently, enforce complex business rules, derive column values automatically, implement complex security authorizations, maintain replicate tables.
17. How many types of database triggers can be specified on a table? What are they?
                 Insert  Update  Delete
Before Row        o.k.    o.k.    o.k.
After Row         o.k.    o.k.    o.k.
Before Statement  o.k.    o.k.    o.k.
After Statement   o.k.    o.k.    o.k.
If the FOR EACH ROW clause is specified, the trigger fires for each row affected by the statement.
If the WHEN clause is specified, the trigger fires according to the returned boolean value.
18. Is it possible to use transaction control statements such as ROLLBACK or COMMIT in a database trigger? Why?
It is not possible. As triggers are defined for each table, if you use COMMIT or ROLLBACK in a trigger, it affects logical transaction processing.
19. What are the two virtual tables available during database trigger execution?
The table columns are referred to as OLD.column_name and NEW.column_name.
For triggers related to INSERT, only NEW.column_name values are available.
For triggers related to UPDATE, both OLD.column_name and NEW.column_name values are available.
For triggers related to DELETE, only OLD.column_name values are available.
20. What happens if a procedure that updates a column of table X is called in a database trigger of the same table?
Mutation of the table occurs.
21. Write the order of precedence for validation of a column in a table.
i. Validation done using database triggers.
ii. Validation done using integrity constraints.
22. What is an exception? What are the types of exceptions?
An exception is the error handling part of a PL/SQL block. The types are predefined and user-defined.
Some of the predefined exceptions are:
CURSOR_ALREADY_OPEN
DUP_VAL_ON_INDEX
NO_DATA_FOUND
TOO_MANY_ROWS
INVALID_CURSOR
INVALID_NUMBER
LOGON_DENIED
NOT_LOGGED_ON
PROGRAM_ERROR
STORAGE_ERROR
TIMEOUT_ON_RESOURCE
VALUE_ERROR
ZERO_DIVIDE
OTHERS

23. What is PRAGMA EXCEPTION_INIT? Explain its usage.
The PRAGMA EXCEPTION_INIT tells the compiler to associate an exception with an Oracle error, to get an error message for a specific Oracle error.
e.g. PRAGMA EXCEPTION_INIT (exception_name, oracle_error_number)

24. What is RAISE_APPLICATION_ERROR?
RAISE_APPLICATION_ERROR is a procedure of the package DBMS_STANDARD which allows user-defined error messages to be issued from a stored sub-program or database trigger.

25. What are the return values of the functions SQLCODE and SQLERRM?
SQLCODE returns the latest code of the error that has occurred.
SQLERRM returns the relevant error message for the SQLCODE.

26. Where are the predefined exceptions stored?
In the STANDARD package.

Procedures, Functions & Packages

27. What is a stored procedure?
A stored procedure is a sequence of statements that performs a specific function.

28. What is the difference between a PROCEDURE and a FUNCTION?
A FUNCTION always returns a value using the RETURN statement.
A PROCEDURE may return one or more values through parameters or may not return at all.

29. What are the advantages of stored procedures?
Extensibility, modularity, reusability, maintainability and one-time compilation.

30. What are the modes of parameters that can be passed to a procedure?
IN, OUT and IN OUT parameters.

31. What are the two parts of a procedure?
Procedure specification and procedure body.

32. Give the structure of a procedure.
PROCEDURE name (parameter list...)
IS
    local variable declarations
BEGIN
    executable statements
EXCEPTION
    exception handlers
END;

33. Give the structure of a function.
FUNCTION name (argument list...) RETURN datatype IS
    local variable declarations
BEGIN
    executable statements
EXCEPTION
    exception handlers
END;

34. Explain how procedures and functions are called in a PL/SQL block.
A function is called as part of an expression:
    sal := calculate_sal('a822');
A procedure is called as a PL/SQL statement:
    calculate_bonus('A822');
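Questions 23 to 25 can be tied together in one short sketch. This is purely illustrative: the table name and the Oracle error chosen (ORA-00054, resource busy) are examples, not part of the original text.

```sql
-- Sketch: associate ORA-00054 with a named exception and report it
DECLARE
    resource_busy EXCEPTION;
    PRAGMA EXCEPTION_INIT(resource_busy, -54);
BEGIN
    EXECUTE IMMEDIATE 'LOCK TABLE emp IN EXCLUSIVE MODE NOWAIT';
EXCEPTION
    WHEN resource_busy THEN
        DBMS_OUTPUT.PUT_LINE('Table is locked: ' || SQLERRM);
    WHEN OTHERS THEN
        -- SQLCODE/SQLERRM report the latest error, per questions 25
        DBMS_OUTPUT.PUT_LINE(SQLCODE || ': ' || SQLERRM);
END;
/
```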
Wednesday, January 7, 2009
How to get the Schema/User size
In SQLPLUS run any of the following scripts to find the total size occupied by a particular schema.
1.
SQL> SELECT sum(bytes)/1024/1024 FROM user_segments;
The output of the above query will be similar to:

SUM(BYTES)/1024/1024
--------------------
             2552.75
-- OR --
2.
SQL> SELECT tablespace_name, SUM(bytes)/1024/1024 AS total_size_mb
     FROM   dba_segments
     WHERE  owner = UPPER('&User_Name')
     GROUP  BY owner, ROLLUP(tablespace_name);
-- OR --
3. To be more precise, breaking the size down by segment type (tables, indexes, LOB segments, etc.):
SQL> SELECT sum(bytes)/1024/1024/1024 AS size_gig, segment_type
     FROM   dba_segments
     WHERE  owner = 'XXX'
     GROUP  BY segment_type;
Implementing Dataguard on 11g RAC
Creating RAC Standby Database
Configuration Details:
• Primary Host Names are RAC_PRIM01 and RAC_PRIM02
• Standby Host Names are RAC_STDBY01 and RAC_STDBY02
• The primary database is RAC_PRIM
• Virtual Names are RAC_PRIM01-vip, RAC_PRIM02-vip, RAC_STDBY01-vip and RAC_STDBY02-vip
• Both the primary and standby databases use ASM for storage
• The following ASM disk groups are being used: +DATA (for data) and +FRA (for recovery/flashback)
• The standby database will be referred to as RAC_STDBY
• Oracle Managed Files will be used.
• ORACLE_BASE is set to /u01/app/oracle
1. Configure Primary and Standby sites
For a better and simpler configuration of Data Guard, it is recommended that the primary and standby machines have exactly the same structure, i.e.

• ORACLE_HOME points to the same mount point on both sites.
• ORACLE_BASE/admin points to the same mount point on both sites.
• ASM disk groups are the same on both sites.
2. Install Oracle Software on each site.
• Oracle Clusterware
• Oracle database executables for use by ASM
• Oracle database executables for use by the RDBMS
3. Server Names / VIPs
The Oracle Real Application Clusters 11g virtual server names and IP addresses are used and maintained by Oracle Cluster Ready Services (CRS).
Note: Both short and fully qualified names will exist.
Server Name/Alias/Host Entry   Purpose
RAC_PRIM01.local               Public host name (PRIMARY node 1)
RAC_PRIM02.local               Public host name (PRIMARY node 2)
RAC_STDBY01.local              Public host name (STANDBY node 1)
RAC_STDBY02.local              Public host name (STANDBY node 2)
RAC_PRIM01-vip.local           Public virtual name (PRIMARY node 1)
RAC_PRIM02-vip.local           Public virtual name (PRIMARY node 2)
RAC_STDBY01-vip.local          Public virtual name (STANDBY node 1)
RAC_STDBY02-vip.local          Public virtual name (STANDBY node 2)
4. Configure Oracle Networking
4.1 Configure Listener on Each Site
Each site will have a listener defined which will be running from the ASM Oracle Home. The following listeners have been defined in this example configuration.
Listener_RAC_PRIM01 (primary node 1)
Listener_RAC_PRIM02 (primary node 2)
Listener_RAC_STDBY01 (standby node 1)
Listener_RAC_STDBY02 (standby node 2)
4.2 Static Registration
Oracle must be able to access all instances of both databases whether they are in an open, mounted or closed state. This means that these must be statically registered with the listener.
These entries will have a special name which will be used to facilitate the use of the Data Guard Broker, discussed later.
4.3 Sample Listener.ora
LISTENER_RAC_STDBY01 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
    )
  )

SID_LIST_LISTENER_RAC_STDBY01 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = RAC_STDBY_dgmgrl.local)
      (SID_NAME = RAC_STDBY1)
      (ORACLE_HOME = $ORACLE_HOME)
    )
  )
4.4 Configure TNS entries on each site.
In order to make things simpler the same network service names will be generated on each site. These service names will be called:
RAC_PRIM1_DGMGRL.local: Points to the RAC_PRIM instance on RAC_PRIM01 using the service name RAC_PRIM_DGMGRL.local. This can be used for creating the standby database.

RAC_PRIM1.local: Points to the RAC_PRIM instance on RAC_PRIM01 using the service name RAC_PRIM.local.

RAC_PRIM2.local: Points to the RAC_PRIM instance on RAC_PRIM02 using the service name RAC_PRIM.local.

RAC_PRIM.local: Points to the RAC_PRIM database, i.e. contains all database instances.

RAC_STDBY1_DGMGRL.local: Points to the RAC_STDBY instance on RAC_STDBY01 using the service name RAC_STDBY1_DGMGRL. ** This will be used for the database duplication.

RAC_STDBY1.local: Points to the RAC_STDBY instance on RAC_STDBY01 using the service name RAC_STDBY.local.

RAC_STDBY2.local: Points to the RAC_STDBY instance on RAC_STDBY02 using the service name RAC_STDBY.local.

RAC_STDBY.local: Points to the RAC_STDBY database, i.e. contains all the database instances.

listener_DB_UNIQUE_NAME.local: A TNS alias entry consisting of two address lines. The first address line is the address of the listener on node 1 and the second is the address of the listener on node 2. Placing both listeners in the address list ensures that the database automatically registers with both nodes. There must be two sets of entries: one for the standby nodes called listener_RAC_STDBY and one for the primary nodes called listener_RAC_PRIM.
Sample tnsnames.ora (RAC_PRIM01)
RAC_PRIM1_DGMGRL.local = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = RAC_PRIM_DGMGRL.local) ) )
RAC_PRIM1.local = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = RAC_PRIM.local) (INSTANCE_NAME = RAC_PRIM1) ) )
RAC_PRIM2.local = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM02-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = RAC_PRIM.local) (INSTANCE_NAME = RAC_PRIM2) ) )
RAC_PRIM.local =
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM02-vip)(PORT = 1521)) (LOAD_BALANCE = yes) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = RAC_PRIM.local) ) )
RAC_STDBY1_DGMGRL.local = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = RAC_STDBY_DGMGRL.local) ) )
RAC_STDBY2.local= (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY02-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = RAC_STDBY.local) (INSTANCE_NAME=RAC_STDBY2) ) )
RAC_STDBY1.local= (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = RAC_STDBY.local) (INSTANCE_NAME=RAC_STDBY1) ) )
RAC_STDBY.local= (DESCRIPTION =
(ADDRESS_LIST= (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY02-vip)(PORT = 1521))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = RAC_STDBY.local) ) )
LISTENERS_RAC_PRIM.local= (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM02-vip)(PORT = 1521)) )
4.5 Configure ASM on each Site
Certain initialisation parameters are only applicable when a database is running in either a standby or primary database role. Defining ALL of the parameters on BOTH sites will ensure that, if the roles are switched (Primary becomes Standby and Standby becomes the new Primary), then no further configuration will be necessary.
Some of the parameters will however be node-specific; therefore there will be one set of parameters for the Primary site nodes and one for the Standby site nodes.
4.6 Primary Site Preparation
The following initialisation parameters should be set on the primary site prior to duplication. Whilst they are only applicable to the primary site, they will be equally configured on the standby site.

dg_broker_config_file1: Point this to a file within the ASM disk group. Note: the file need not exist.
dg_broker_config_file2: Point this to a file within the ASM disk group. Note: the file need not exist.
db_block_checksum: Enables data block integrity checking (OPTIONAL).
db_block_checking: Enables data block consistency checking (OPTIONAL).
As long as performance implications allow and do not violate existing SLAs it should be mandatory to have db_block_checksum and db_block_checking enabled.
Additionally, the following must also be configured:
Archive Log Mode
The primary database must be placed into archive log mode.
Forced Logging
The standby database is kept up to date by applying transactions on the standby site which have been recorded in the online redo logs. In environments that have not previously utilized Data Guard, the NOLOGGING option may have been used to enhance database performance. Use of this feature in a Data Guard protected environment is strongly discouraged. From version 9.2, Oracle introduced a way to prevent NOLOGGING operations, known as the forced logging mode of the database. To enable forced logging, issue the following command on the primary database:
alter database force logging;
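Both prerequisites can be verified in one query; this is an illustrative check, run on the primary:

```sql
-- Confirm archivelog mode and force logging are in effect
SELECT log_mode, force_logging FROM v$database;
```

The expected values are ARCHIVELOG and YES before proceeding with the standby creation.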
Password File

The primary database must be configured to use an external password file. This is generally done at the time of installation. If not, then a password file can be created using the following command:
orapwd file=$ORACLE_HOME/dbs/orapwRAC_PRIM1 password=mypasswd
Before issuing the command ensure that the ORACLE_SID is set to the appropriate instance – in this case RAC_PRIM1.
Repeat this for each node of the cluster.
Also ensure that the initialisation parameter remote_login_passwordfile is set to ‘exclusive’.
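The accounts registered in the password file can be confirmed afterwards; a quick illustrative check:

```sql
-- Users present in the password file and their privileges
SELECT username, sysdba, sysoper FROM v$pwfile_users;
```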
As of Oracle 11.1, the Oracle Net sessions for redo transport can alternatively be authenticated through SSL (see also section 6.2.1 in the Data Guard Concepts manual).
Standby Site Preparation
Initialization Parameter File :
As part of the duplication process a temporary initialisation file will be used. For the purposes of this document this file will be called /tmp/initRAC_PRIM.ora and will contain one line:
db_name=RAC_PRIM
Password File
The standby database must be configured to use a password file. This must be created by copying the password file from the primary site to the standby site and renaming it to reflect the standby instances.
Repeat this for each node of the cluster.
Additionally ensure that the initialisation parameter remote_login_passwordfile is set to 'exclusive'.
Create Audit File Destination
Create a directory on each node of the standby system to hold audit files:

mkdir /u01/app/oracle/admin/RAC_STDBY/adump
Start Standby Instance
Now that everything is in place the standby instance needs to be started ready for duplication to commence:
export ORACLE_SID=RAC_STDBY1
sqlplus / as sysdba
startup nomount pfile='/tmp/initRAC_PRIM.ora'
Test Connection
From the primary database test the connection to the standby database using the command:
sqlplus sys/mypasswd@RAC_STDBY_dgmgrl as sysdba
This should successfully connect.
Duplicate the Primary database
The standby database is created from the primary database. To achieve this, up to Oracle 10g a backup of the primary database needed to be made, transferred to the standby site and restored. Oracle RMAN 11g simplifies this process by providing a new method which allows an 'on the fly' duplicate to take place. This is the method used here (the pre-11g method is described in the Appendices).
From the primary database invoke RMAN using the following command:
export ORACLE_SID=RAC_PRIM1
rman target / auxiliary sys/mypasswd@RAC_STDBY1_dgmgrl
NOTE: If RMAN returns the error "rman: can't open target" then ensure that $ORACLE_HOME/bin appears first in the PATH, because there is a Linux utility also named rman.
Next, issue the following duplicate command:
duplicate target database for standby from active database
  spfile
    set db_unique_name='RAC_STDBY'
    set control_files='+DATA/RAC_STDBY/controlfile/control01.dbf'
    set instance_number='1'
    set audit_file_dest='/u01/app/oracle/admin/RAC_STDBY/adump'
    set remote_listener='LISTENERS_RAC_STDBY'
  nofilenamecheck;
Create an SPFILE for the Standby Database
By default the RMAN duplicate command will have created an spfile for the instance located in $ORACLE_HOME/dbs.
This file will contain entries that refer to the instance names on the primary database. As part of this creation process the database name is being changed to reflect the DB_UNIQUE_NAME for the standby database, and as such the spfile created is essentially worthless. A new spfile will now be created using the contents of the primary database’s spfile.
Get location of the Control File
Before starting this process, note down the value of the control_files parameter from the currently running standby database
Create a text initialization pfile
The first stage in the process requires that the primary databases initialisation parameters be dumped to a text file:
export ORACLE_SID=RAC_PRIM1
sqlplus "/ as sysdba"
create pfile='/tmp/initRAC_STDBY.ora' from spfile;
Copy the created file ‘/tmp/initRAC_STDBY.ora’ to the standby server.
Edit the init.ora
On the standby server, edit the /tmp/initRAC_STDBY.ora file:
NOTE: Change every occurrence of RAC_PRIM to RAC_STDBY, with the exception of the parameter DB_NAME, which must NOT change.
Set the control_files parameter to reflect the value noted in "Get location of the Control File" above. This will most likely be +DATA/RAC_STDBY/controlfile/control01.dbf.
Save the changes.
Create SPFILE
Having created the textual initialisation file it now needs to be converted to a spfile and stored within ASM by issuing:
export ORACLE_SID=RAC_STDBY1
sqlplus "/ as sysdba"
create spfile='+DATA/RAC_STDBY/spfileRAC_STDBY.ora' from pfile='/tmp/initRAC_STDBY.ora';
Create Pointer File
With the spfile now being in ASM, the RDBMS instances need to be told where to find it.
Create a file in the $ORACLE_HOME/dbs directory of standby node 1 (RAC_STDBY01 ) called initRAC_STDBY1.ora . This file will contain one line:
spfile=’+DATA/RAC_STDBY/spfileRAC_STDBY.ora’
Create a file in the $ORACLE_HOME/dbs directory of standby node 2 (RAC_STDBY02) called initRAC_STDBY2.ora . This file will also contain one line:
spfile='+DATA/RAC_STDBY/spfileRAC_STDBY.ora'
Additionally remove the RMAN created spfile from $ORACLE_HOME/dbs located on standby node 1 (RAC_STDBY01 )
Create secondary control files
When the RMAN duplicate completed, it created a standby database with only one control file. This is not good practice, so the next steps in the process create extra control files:
1. Shutdown and startup the database using nomount :
shutdown immediate;
startup nomount;
2. Change the value of the control_files parameter to '+DATA','+FRA':

alter system set control_files='+DATA','+FRA' scope=spfile;
3. Shutdown and startup the database again :
shutdown immediate;
startup nomount;
4. Use RMAN to duplicate the control file already present:
export ORACLE_SID=RAC_STDBY1
rman target /
restore controlfile from '+DATA/RAC_STDBY/controlfile/control01.dbf';
This will create a control file in both of the ASM disk groups +DATA and +FRA. It will also update the control_files parameter in the spfile.
If you wish to have 3 control files, simply update the control_files parameter to include the original control file as well as the ones just created.
Cluster-enable the Standby Database
The standby database now needs to be brought under clusterware control, i.e. registered with Cluster
Ready Services.
Before commencing, check that it is possible to start the instance on the second standby node (RAC_STDBY02):
export ORACLE_SID=RAC_STDBY2
sqlplus "/ as sysdba"
startup mount;
Ensure Server Side Load Balancing is configured
Check whether the init.ora parameter remote_listener is defined in the standby instances.
If the parameter is not present then create an entry in the tnsnames.ora files (of all standby nodes) which has the following format:
LISTENERS_RAC_STDBY.local =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip.local)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY02-vip.local)(PORT = 1521))
    )
  )
Then set the value of the parameter remote_listener to LISTENERS_RAC_STDBY.local.
Register the Database with CRS
Issue the following commands to register the database with Oracle Cluster Ready Services:
srvctl add database -d RAC_STDBY -o $ORACLE_HOME -m local -p "+DATA/RAC_STDBY/spfileRAC_STDBY.ora" -n RAC_PRIM -r physical_standby -s mount
srvctl add instance -d RAC_STDBY -i RAC_STDBY1 -n RAC_STDBY01
srvctl add instance -d RAC_STDBY -i RAC_STDBY2 -n RAC_STDBY02
Test
Test that the above has worked by stopping any running standby instances and then starting the database (all instances) using the command:
srvctl start database -d RAC_STDBY
Once started check that the associated instances are running by using the command:
srvctl status database -d RAC_STDBY
Temporary Files
Temporary files associated with a temporary tablespace are automatically created with a standby database.
Create Standby Redo Logs
Standby Redo Logs (SRL) are used to store redo data from the primary databases when the transport is configured using the Logwriter (LGWR), which is the default.
Each standby redo log file must be at least as large as the largest redo log file in the primary database. It is recommended that all redo log files in the primary database and the standby redo logs in the respective standby database(s) be of the same size.
The recommended number of SRLs is :
(# of online redo logs per primary instance + 1) * # of instances .
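As a worked example of this formula: with 3 online redo log groups per primary instance and 2 instances, (3 + 1) * 2 = 8 standby redo logs are needed. The current online redo log layout, and any existing SRLs, can be checked with illustrative queries such as:

```sql
-- Online redo log groups per thread and their largest size (run on the primary)
SELECT thread#, COUNT(*) AS orl_groups, MAX(bytes)/1024/1024 AS max_size_mb
FROM   v$log
GROUP  BY thread#;

-- Existing standby redo logs, if any
SELECT thread#, group#, bytes/1024/1024 AS size_mb
FROM   v$standby_log;
```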
Whilst standby redo logs are only used by the standby site, they should be defined on both the primary as well as the standby sites. This will ensure that if the two databases change their roles (primary-> standby and standby -> primary) then no extra configuration will be required.
The standby database must be mounted (mount as ‘standby’ is the default) before SRLs can be created.
SRLs are created as follows (the size given below is just an example and has to be adjusted to the current environment):
1. sqlplus / as sysdba
2. startup mount
3. alter database add standby logfile SIZE 100M;
NOTE: Standby redo logs are also created in logfile groups. Be aware that the group numbers must be greater than the group numbers associated with the ORLs in the primary database. With regard to group numbering, Oracle makes no difference between ORLs and SRLs.
NOTE: Standby Redo Logs need to be created on both databases.
The standby database is now created. The next stage in the process concerns enabling transaction synchronisation. There are two ways of doing this:
1. Using SQL*Plus
2. Using the Data Guard Broker

Configuring Data Guard using SQL*Plus
Configure the Standby Database
The following initialisation parameters need to be set on the standby database:
db_unique_name: RAC_STDBY
db_block_checking: TRUE (OPTIONAL)
db_block_checksum: TRUE (OPTIONAL)
log_archive_config: dg_config=(RAC_PRIM,RAC_STDBY)
log_archive_max_processes: 5
fal_client: RAC_STDBY1.local (on RAC_STDBY01) / RAC_STDBY2.local (on RAC_STDBY02)
fal_server: 'RAC_PRIM1.local','RAC_PRIM2.local'
standby_file_management: AUTO
log_archive_dest_2: service=RAC_PRIM LGWR SYNC AFFIRM db_unique_name=RAC_PRIM VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
log_archive_dest_2 (Max. Performance mode): service=RAC_PRIM ARCH db_unique_name=RAC_PRIM VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
Configure the Primary Database
The following initialisation parameters need to be set on the primary database:
db_unique_name: RAC_PRIM
db_block_checking: TRUE (OPTIONAL)
db_block_checksum: TRUE (OPTIONAL)
log_archive_config: dg_config=(RAC_PRIM,RAC_STDBY)
log_archive_max_processes: 5
fal_client: RAC_PRIM1.local (on RAC_PRIM01) / RAC_PRIM2.local (on RAC_PRIM02)
fal_server: 'RAC_STDBY1.local','RAC_STDBY2.local'
standby_file_management: AUTO
log_archive_dest_2: service=RAC_STDBY LGWR SYNC AFFIRM db_unique_name=RAC_STDBY VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
log_archive_dest_2 (Max. Performance mode): service=RAC_STDBY ARCH db_unique_name=RAC_STDBY VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
Set the Protection Mode
In order to specify the protection mode, the primary database must be mounted but not opened.
NOTE: The database must be mounted in exclusive mode, which effectively means that all RAC instances but one must be shut down and the remaining instance started with a parameter setting of cluster_database=false.
Once this is the case then the following statement must be issued on the primary site:
If using Maximum Protection mode then use the command:
Alter database set standby database to maximize protection;
If using Maximum Availability mode then use the command:
Alter database set standby database to maximize availability;
If using Maximum Performance mode then use the command:
Alter database set standby database to maximize performance;
Enable Redo Transport & Redo Apply
Enabling the transport and application of redo to the standby database is achieved by the following:
Standby Site
The standby database needs to be placed into Managed Recovery mode. This is achieved by issuing the statement:
Alter database recover managed standby database disconnect;
Oracle 10gR2 introduced Real Time redo apply (SRLs required). Enabling real time apply is achieved by issuing the statement:
alter database recover managed standby database using current logfile disconnect;
Primary Site:
Set:
log_archive_dest_state_2=enable
in the init.ora file or issue via SQLPlus :
alter system set log_archive_dest_state_2=enable;
How to change the characterset of Oracle 10g DB
Decide on the character set you want to change to, and check whether the new character set is a superset of the old character set.
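The current character sets can be checked first; an illustrative query:

```sql
-- Current database and national character sets
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```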
1. SQL> shutdown immediate
2. SQL> startup restrict
3. SQL> alter database character set internal_use UTF8;
4. SQL> shutdown immediate
5. SQL> startup
RMAN Backup Validation Check
To Test and Check the Integrity of Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The database prevents operations that result in unusable backup files or corrupt restored datafiles. The database server automatically does the following:
* Blocks access to datafiles while they are being restored or recovered
* Allows only one restore operation for each datafile at a time
* Ensures that incremental backups are applied in the correct order
* Stores information in backup files to allow detection of corruption
To Detect Logical Block Corruption
----------------------------------
This tests data and index blocks for logical corruption, such as corruption of a row piece or index entry. If RMAN finds logical corruption, then it logs the block in the alert.log
1. Start RMAN in nocatalog mode and connect to your database:
From the operating system prompt issue at database host :
$ rman target / nocatalog
2. From the RMAN> prompt issue the validate command with the "check logical" clause.
The following example shows how to validate a single datafile:
run {
  allocate channel d1 type disk;
  backup check logical validate datafile 77;
  release channel d1;
}
If you wish to monitor the progress of the RMAN backup validation, you may issue this query:
SQL> select sid, serial#, context, sofar, totalwork,
            round(sofar/totalwork*100,2) "%_complete"
     from v$session_longops
     where opname like 'RMAN%'
     and opname not like '%aggregate%'
     and totalwork != 0
     and sofar <> totalwork
     /
3. Once the validate process is complete, check either the alert log or a view, depending on the version of Oracle being used.
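On 10g and later, blocks flagged as corrupt by the validation are also recorded in a dictionary view, which can be queried directly:

```sql
-- Blocks flagged as corrupt by the most recent BACKUP ... VALIDATE
SELECT file#, block#, blocks, corruption_type
FROM   v$database_block_corruption;
```

An empty result set indicates that no corruption was detected.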
Oracle Database 11g Top New Features : Summary
1) Automatic Diagnostic Repository [ADR]
2) Database Replay
3) Automatic Memory Tuning
4) Case sensitive password
5) Virtual columns and indexes
6) Interval Partition and System Partition
7) The Result Cache
8) ADDM RAC Enhancements
9) SQL Plan Management and SQL Plan Baselines
10) SQL Access Advisor & Partition Advisor
11) SQL Query Repair Advisor
12) SQL Performance Analyzer (SPA) New
13) DBMS_STATS Enhancements
14) Total Recall (Flashback Data Archive)
Note: The above are only top new features, there are other features as well introduced in 11g which will be included subsequently
Oracle 11g Database DBA New Features with brief explanation
==========================================
# Database Capture/Replay of database workloads:
This allows the total database workload to be captured, transferred to a test database created from a backup or standby database, then replayed to test the effects of an upgrade or system change. Currently, Oracle is working towards a capture performance overhead of 5%, so this will capture real production workloads.
# Automatic Memory Tuning:
Automatic PGA tuning was introduced in Oracle 9i. Automatic SGA tuning was introduced in Oracle 10g. In 11g, all memory can be tuned automatically by setting one parameter. We can literally tell Oracle how much memory it has and it determines how much to use for the PGA, SGA and OS processes. Maximum and minimum thresholds can be set.
# Interval partitioning for tables :
Interval partitions are an extension to range partitioning. They provide automation for equi-sized range partitions. Partitions are created as metadata and only the start partition is made persistent. The additional segments are allocated as the data arrives; the additional partitions and local indexes are created automatically.
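A minimal sketch of the syntax (the table and column names are made up for illustration): only the start partition is declared, and Oracle creates a new monthly partition whenever a row arrives beyond the existing range.

```sql
-- Monthly interval partitions: only p0 is defined up front
CREATE TABLE sales_history (
    sale_id   NUMBER,
    sale_date DATE
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
    PARTITION p0 VALUES LESS THAN (TO_DATE('2008-01-01', 'YYYY-MM-DD'))
);
```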
# Feature Based Patching:
All one-off patches will be classified as to which feature they affect. This allows you to easily identify which patches are necessary for the features you are using. EM will allow you to subscribe to a feature-based patching service, so EM automatically scans for available patches for the features you are using.
# RMAN UNDO bypass :
RMAN backups can bypass undo. Undo tablespaces are getting huge, but contain lots of useless information. Now RMAN can bypass those types of tablespace. Great for exporting a tablespace from backup.
# Virtual columns/indexes :
A user can create a virtual index on a table. This virtual index is not visible to the optimizer, so it will not affect performance; a developer can use a HINT and see whether the index is useful or not. Invisible indexes prevent premature use of newly created indexes.
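A short sketch of the invisible-index workflow (table and index names are hypothetical): the index is maintained but ignored by the optimizer until it is proven useful.

```sql
-- Create the index invisible so existing plans are unaffected
CREATE INDEX emp_ename_ix ON emp (ename) INVISIBLE;

-- Test it in one session without affecting other users
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

-- Make it generally available once proven useful
ALTER INDEX emp_ename_ix VISIBLE;
```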
# New default audit settings :
Unlike earlier Oracle databases where general database auditing was "off" by default, logging is intended to be enabled by default with the Oracle Database 11g secure configuration. Notable performance improvements are planned to reduce the performance degradation typically associated with auditing.
# Case sensitive password :
Passwords are expected to become case sensitive. This and other changes should result in better protection against password-guessing scenarios. For example, in addition to limiting the number of failed login attempts to 10 (the default configuration in 10gR2), Oracle 11g's planned default settings should expire passwords every 180 days, and limit to seven the number of times a user can log in with an expired password before disabling access.
# Faster DML triggers : Create a disabled trigger; specify trigger firing order
# Fine grained access control for Utl_TCP:
In 10g all ports were available; now access is controlled.
# Data Guard supports "Flashback Standby"
# New Trigger features
# Partitioning by logical object and automated partition creation.
# LOB's - New high-performance LOB features.
# New Oracle11g Advisors
# Enhanced Read only tables
# Table trigger firing order
# Enhanced Index rebuild online : - Online index build with NO pause to DML.
# No recompilation of dependent objects when:
A) Columns added to tables
B) Procedures added to packages
# Improved optimizer statistics collection speed
# Online index build with NO pause to DML
# Read only table :-
alter table t read only
alter table t read write
Oracle 11g Database SQL/PL-SQL New Features
-------------------------------------------
> Fine Grained Dependency Tracking:
In 11g we track dependencies at the level of element within unit. so that these changes haveno consequence
• Transparent performance improvement
• Unnecessary recompilation certainly consumes CPU
create table t(a number)
create view v as select a from t
alter table t add(Unheard_Of number)
select status from User_Objects where Object_Name = 'V';    -----VALID
No recompilation of dependent objects when columns are added to tables OR procedures are added to packages.
> Named and Mixed Notation from SQL:
select fun(P4=>10) from DUAL
In 10g it was not possible to call a function in a SELECT statement passing only the 4th parameter by name, but in 11g it is possible.
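A minimal sketch of named notation in SQL (the function fun and its parameters are hypothetical, defined here so the call above is self-contained):

```sql
CREATE OR REPLACE FUNCTION fun (
  p1 NUMBER DEFAULT 1,
  p2 NUMBER DEFAULT 2,
  p3 NUMBER DEFAULT 3,
  p4 NUMBER DEFAULT 4
) RETURN NUMBER IS
BEGIN
  RETURN p1 + p2 + p3 + p4;
END;
/
-- 11g: pass only the 4th parameter by name, straight from SQL (1+2+3+10 = 16)
SELECT fun(p4 => 10) FROM dual;
```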
> PL/SQL "continue" keyword - It is same as we read in c/c++ loop
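A short sketch of the new CONTINUE statement, which skips to the next loop iteration just like in C/C++:

```sql
BEGIN
  FOR i IN 1 .. 5 LOOP
    CONTINUE WHEN MOD(i, 2) = 0;  -- skip even numbers
    DBMS_OUTPUT.PUT_LINE(i);      -- prints 1, 3, 5
  END LOOP;
END;
/
```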
> Support for “super”: It is same "super" in Java.
> Powerful Regular Expressions:
Now we can access data between TAGS, like the data between tags .........
The new built-in REGEXP_COUNT returns the number of times the pattern is matched in theinput string.
> New PL/SQL data type "simple_integer"
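A sketch of SIMPLE_INTEGER, which is a NOT NULL subtype of PLS_INTEGER that wraps on overflow instead of raising an error, and is faster in natively compiled code:

```sql
DECLARE
  -- SIMPLE_INTEGER must be initialized and can never be NULL
  total SIMPLE_INTEGER := 0;
BEGIN
  FOR i IN 1 .. 100 LOOP
    total := total + i;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(total);  -- 5050
END;
/
```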
> SQL Performance Analyzer(SPA) :
It is the same as Database Replay except it does not capture all transactions. The SQL Performance Analyzer (SPA) leverages existing Oracle Database 10g SQL tuning components. The SPA provides the ability to capture a specific SQL workload in a SQL Tuning Set, take a performance baseline before a major database or system change, make the desired change to the system, and then replay the SQL workload against the modified database or configuration. The before and after performance of the SQL workload can then be compared with just a few clicks of the mouse. The DBA only needs to isolate any SQL statements that are now performing poorly and tune them via the SQL Tuning Advisor.
> Caching The Results with /*+ result_cache */ :
select /*+ result_cache */ * from my_table. New for Oracle 11g, the result_cache hint caches the result set of a select statement. This is similar to alter table table_name cache, but as you can see, adding predicates makes /*+ result_cache */ considerably more powerful by caching a subset of larger tables and common queries.
select /*+ result_cache */ col1, col2, col3 from my_table where colA = :B1
> The compound trigger :
A compound trigger lets you implement actions for each of the table DML timing points in asingle trigger
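A sketch of a compound trigger (the table emp and the row-counting logic are made up for illustration); note how state declared at the top is shared across all timing points in the one trigger:

```sql
CREATE OR REPLACE TRIGGER emp_dml_ct
  FOR INSERT OR UPDATE ON emp
  COMPOUND TRIGGER

  -- state here is shared by every timing-point section below
  g_rows PLS_INTEGER := 0;

  BEFORE STATEMENT IS
  BEGIN
    g_rows := 0;
  END BEFORE STATEMENT;

  AFTER EACH ROW IS
  BEGIN
    g_rows := g_rows + 1;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    DBMS_OUTPUT.PUT_LINE(g_rows || ' row(s) touched');
  END AFTER STATEMENT;

END emp_dml_ct;
/
```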
> PL/SQL unit source can exceeds 32k characters
> Easier to execute table DDL operations online:
Option to wait for active DML operations instead of aborting
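The wait-instead-of-abort behaviour is controlled by the new DDL_LOCK_TIMEOUT parameter; a minimal sketch (the table name is made up):

```sql
-- Wait up to 30 seconds for active DML to finish
-- instead of failing immediately with ORA-00054
ALTER SESSION SET ddl_lock_timeout = 30;
ALTER TABLE orders ADD (note VARCHAR2(100));
```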
> Fast add column with default value:
Does not need to update all rows to the default value.
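A sketch of the fast add-column path (the table name is made up); in 11g, adding a column with both a DEFAULT and a NOT NULL constraint is a metadata-only change, so existing rows are not physically rewritten:

```sql
-- Metadata-only in 11g: existing rows are not updated on disk
ALTER TABLE big_table ADD (status VARCHAR2(10) DEFAULT 'NEW' NOT NULL);
```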
Oracle 11g Database Backup & Recovery New Features
------------------------------------------------
* Enhanced configuration of archive deletion policies: Archived logs can be deleted if they are not needed by Data Guard, Streams, Flashback, etc. When you CONFIGURE an archived log deletion policy, it applies to all archiving destinations, including the flash recovery area. BACKUP ... DELETE INPUT and DELETE ... ARCHIVELOG use this configuration, as does the flash recovery area. When we back up the recovery area, RMAN can fail over to other archived redo log destinations if the flash recovery area is inaccessible.
* Configuring backup compression:
In 11g we can use the CONFIGURE command to choose between the BZIP2 and ZLIB compression algorithms for RMAN backups.
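A sketch of the two RMAN commands involved (note: as far as I recall, the ZLIB algorithm requires the Advanced Compression option license in 11g; BZIP2 is the default):

```
RMAN> CONFIGURE COMPRESSION ALGORITHM 'ZLIB';
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;
```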
* Active Database Duplication:
The DUPLICATE command is now network aware, i.e. we can create a duplicate or standby database over the network without taking a new backup or using an old backup.
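A sketch of active duplication (the connect strings prod and dupdb are hypothetical; the auxiliary instance must already be started NOMOUNT):

```
RMAN> CONNECT TARGET sys@prod AUXILIARY sys@dupdb
RMAN> DUPLICATE TARGET DATABASE TO dupdb
        FROM ACTIVE DATABASE
        NOFILENAMECHECK;
```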
* Parallel backup and restore for very large files:
RMAN backups of large data files now use multiple parallel server processes to efficiently distribute the workload for each file. This feature improves the performance of backups.
* Improved block media recovery performance:
The RECOVER command can recover individual data blocks. RMAN can take older, uncorrupted versions of the blocks from flashback logs and use them, thereby speeding up block media recovery.
* Fast incremental backups on physical standby database:
11g includes the new ability to enable block change tracking on a physical standby database (ALTER DATABASE ENABLE/DISABLE BLOCK CHANGE TRACKING SQL statement). This enables faster incremental backups on a physical standby database than in previous releases, because RMAN can identify the blocks changed since the last incremental backup.
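A sketch of the statement run on the physical standby (the tracking file path is illustrative):

```sql
-- On the physical standby database
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/oradata/stby/bct.chg';
```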
11g ASM New Features
-----------------------
The new features in Automatic Storage Management (ASM) extend the storage management automation, improve scalability, and further simplify management for Oracle Database files.
■ ASM Fast Mirror Resync
A new SQL statement, ALTER DISKGROUP ... DISK ONLINE, can be executed after a failed disk has been repaired. The command first brings the disk online for writes so that no new writes are missed. Subsequently, it initiates a copy of all extents marked as stale on a disk from their redundant copies. This feature significantly reduces the time it takes to repair a failed disk group, potentially from hours to minutes. The repair time is proportional to the number of extents that have been written to or modified since the failure.
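A sketch of fast mirror resync in practice (the disk group data and disk name data_0001 are hypothetical); the disk_repair_time attribute sets how long ASM waits before permanently dropping the failed disk:

```sql
-- How long ASM keeps a failed disk before dropping it (example value)
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '3.6h';

-- After the disk is repaired, bring it back; only stale extents are resynced
ALTER DISKGROUP data ONLINE DISK data_0001;
```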
■ ASM Manageability Enhancements
The new storage administration features for ASM manageability include the following:
■ New attributes for disk group compatibility
To enable some of the new ASM features, you can use two new disk group compatibility attributes, compatible.rdbms and compatible.asm. These attributes specify the minimum software version that is required to use disk groups for the database and for ASM, respectively. This feature enables heterogeneous environments with disk groups from both Oracle Database 10g and Oracle Database 11g. By default, both attributes are set to 10.1. You must advance these attributes to take advantage of the new features.
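Advancing the attributes is a one-way operation; a sketch (the disk group name data is hypothetical):

```sql
-- Advance compatibility to unlock 11g ASM features (cannot be reversed)
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm'   = '11.1';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';
```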
■ New ASM command-line utility (ASMCMD) commands and options
ASMCMD allows ASM disk identification, disk bad block repair, and backup and restore operations in your ASM environment for faster recovery.
■ ASM fast rebalance
Rebalance operations that occur while a disk group is in RESTRICTED mode eliminate the lock and unlock extent map messaging between ASM instances in Oracle RAC environments, thus improving overall rebalance throughput. This collection of ASM management features simplifies and automates storage management for Oracle databases.
■ ASM Preferred Mirror Read
When ASM failure groups are defined, ASM can now read from the extent that is closest to it, rather than always reading the primary copy. A new initialization parameter, ASM_PREFERRED_READ_FAILURE_GROUPS, lets the ASM administrator specify a list of failure group names that contain the preferred read disks for each node in a cluster.
In an extended cluster configuration, reading from a local copy provides a great performance advantage. Every node can read from its local disk group (failure group), resulting in higher efficiency and performance and reduced network traffic.
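A sketch of setting the parameter (the disk group DATA, failure group SITEA, and instance +ASM1 are hypothetical; the value takes the form diskgroup.failuregroup, set per ASM instance):

```sql
-- On the ASM instance at site A: prefer reads from the local failure group
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEA'
  SCOPE = BOTH SID = '+ASM1';
```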
■ ASM Rolling Upgrade
Rolling upgrade is the ability of clustered software to function when one or more of the nodes in the cluster are at different software versions. The various versions of the software can still communicate with each other and provide a single system image. The rolling upgrade capability will be available when upgrading from Oracle Database 11g Release 1 (11.1).
This feature allows independent nodes of an ASM cluster to be migrated or patched without affecting the availability of the database. Rolling upgrade provides higher uptime and graceful migration to new releases.
■ ASM Scalability and Performance Enhancements
This feature increases the maximum data file size that Oracle can support to 128 TB. ASM supports file sizes greater than 128 TB in any redundancy mode. This provides near unlimited capacity for future growth. The ASM file size limits are:
■ External redundancy - 140 PB
■ Normal redundancy - 42 PB
■ High redundancy - 15 PB
Customers can also increase the allocation unit size for a disk group in powers of 2, up to 64 MB.
■ Convert Single-Instance ASM to Clustered ASM
This feature provides support within Enterprise Manager to convert a non-clustered ASM database to a clustered ASM database by implicitly configuring ASM on all nodes. It also extends the single-instance to Oracle RAC conversion utility to support standby databases.
Simplifying the conversion makes it easier for customers to migrate their databases and achieve the benefits of scalability and high availability provided by Oracle RAC.
■ New SYSASM Privilege for ASM Administration
This feature introduces the new SYSASM privilege to allow for separation of database management and storage management responsibilities. The SYSASM privilege allows an administrator to manage the disk groups that can be shared by multiple databases. The SYSASM privilege provides a clear separation of duties from the SYSDBA privilege.
Dropping a Database in 10G
Dropping a Database in 10G Consists of the following steps.
$ echo $ORACLE_SID
$ sqlplus "/as sysdba"
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 1077936128 bytes
Fixed Size                  2034344 bytes
Variable Size             427824472 bytes
Database Buffers          633339904 bytes
Redo Buffers               14737408 bytes
SQL> alter database mount exclusive;
Database altered.
SQL> alter system enable restricted session;
System altered.
SQL> select name from v$database;
NAME
---------
TEST

SQL> drop database;
Database dropped.
Database is dropped & all related files are deleted automatically.
Please be very cautious while using this syntax in a multi-database environment.
RMAN Backup and Recovery Scenarios
=> Complete Closed Database Recovery. System tablespace is missing
If the system tablespace is missing or corrupted, the database cannot be started up, so a complete closed database recovery must be performed.
Pre requisites: A closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted system datafile to its original location, ie:
cp -p /user/backup/uman/system01.dbf /user/oradata/u01/dbtst/system01.dbf
2. startup mount;
3. recover datafile 1;
4. alter database open;
=> Complete Open Database Recovery. Non system tablespace is missing
If a non system tablespace is missing or corrupted while the database is open, recovery can be performed while the database remains open.
Pre requisites: A closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted datafile to its original location, ie:
cp -p /user/backup/uman/user01.dbf /user/oradata/u01/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;
=> Complete Open Database Recovery (when the database is initially closed). Non system tablespace is missing
If a non system tablespace is missing or corrupted and the database crashed, recovery can be performed after the database is open.
Pre requisites: A closed or open database backup and archived logs.
1. startup; (you will get ora-1157 ora-1110 and the name of the missing datafile, the database will remain mounted)
2. Use OS commands to restore the missing or corrupted datafile to its original location, ie:
cp -p /user/backup/uman/user01.dbf /user/oradata/u01/dbtst/user01.dbf
3. alter database datafile 3 offline; (tablespace cannot be used because the database is not open)
4. alter database open;
5. recover datafile 3;
6. alter tablespace <tablespace_name> online;
=> Recovery of a Missing Datafile that has no backups (database is open).
If a non system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist.
Pre requisites: All relevant archived logs.
1. alter tablespace <tablespace_name> offline immediate;
2. alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf';
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;
If the create datafile command needs to be executed to place the datafile on a location different than the original use:
alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf' as '/user/oradata/u02/dbtst/newdata01.dbf';
=> Restore and Recovery of a Datafile to a different location.
If a non system datafile is missing and its original location is not available, the restore can be made to a different location and recovery performed.
Pre requisites: All relevant archived logs.
1. Use OS commands to restore the missing or corrupted datafile to the new location, ie:
cp -p /user/backup/uman/user01.dbf /user/oradata/u02/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile '/user/oradata/u01/dbtst/user01.dbf' to '/user/oradata/u02/dbtst/user01.dbf';
4. recover tablespace <tablespace_name>;
5. alter tablespace <tablespace_name> online;
=> Control File Recovery
Always multiplex your controlfiles. If the controlfiles are missing, the database crashes.
Pre requisites: A backup of your controlfile and all relevant archived logs.
1. startup; (you get ora-205, missing controlfile, the instance starts but the database is not mounted)
2. Use OS commands to restore the missing controlfile to its original location:
cp -p /user/backup/uman/control01.dbf /user/oradata/u01/dbtst/control01.dbf
cp -p /user/backup/uman/control02.dbf /user/oradata/u01/dbtst/control02.dbf
3. alter database mount;
4. recover automatic database using backup controlfile;
5. alter database open resetlogs;
6. make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant.
=> Incomplete Recovery, Until Time/Sequence/Cancel
Incomplete recovery may be necessary when an archived log is missing, so recovery can only be made until the previous sequence, or when an important object was dropped, and recovery needs to be made until before the object was dropped.
Pre requisites: A closed or open database backup and archived logs, and the time or sequence at which the 'until' recovery needs to be performed.
1. If the database is open, shutdown abort
2. Use OS commands to restore all datafiles to their original locations:
cp -p /user/backup/uman/u01/*.dbf /user/oradata/u01/dbtst/
cp -p /user/backup/uman/u02/*.dbf /user/oradata/u01/dbtst/
cp -p /user/backup/uman/u03/*.dbf /user/oradata/u01/dbtst/
cp -p /user/backup/uman/u04/*.dbf /user/oradata/u01/dbtst/
etc...
3. startup mount;
4. recover automatic database until time '2004-03-31:14:40:45';
5. alter database open resetlogs;
6. make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant.
Alternatively you may use, instead of until time, until sequence or until cancel:
recover automatic database until sequence 120 thread 1; OR
recover database until cancel;
=> Rman Recovery Scenarios
Rman recovery scenarios require that the database is in archivelog mode, and that backups of datafiles, control files and archived redolog files are made using Rman. Incremental Rman backups may be used also. Rman can be used with the repository installed in the controlfile, or with a recovery catalog that may be installed in the same or another database.
Configuration and operation recommendations:
Set the parameter controlfile autobackup to ON to have, with each backup, a controlfile backup also:
configure controlfile autobackup on;
Set the parameter retention policy to the recovery window you want to have, ie redundancy 2 will keep the last two backups available, after executing delete obsolete commands:
configure retention policy to redundancy 2;
Execute your full backups with the option 'plus archivelog' to include your archivelogs with every backup:
backup database plus archivelog;
Perform daily maintenance routines to maintain on your backup directory only the number of backups you need:
crosscheck backup;
crosscheck archivelog all;
delete noprompt obsolete backup;
To work with Rman and a database based catalog follow these steps:
1. sqlplus /
2. create tablespace repcat;
3. create user rcuser identified by rcuser default tablespace repcat temporary tablespace temp;
4. grant connect, resource, recovery_catalog_owner to rcuser
5. exit
6. rman catalog rcuser/rcuser # connect to rman catalog as the rcuser
7. create catalog # create the catalog
8. connect target / #
=> Complete Closed Database Recovery. System tablespace is missing
In this case complete recovery is performed; only the system tablespace is missing, so the database can be opened without resetting the redologs.
1. rman target /
2. startup mount;
3. restore database;
4. recover database;
5. alter database open;
=> Complete Open Database Recovery. Non system tablespace is missing, database is up
1. rman target /
2. sql 'alter tablespace <tablespace_name> offline immediate';
3. restore datafile 3;
4. recover datafile 3;
5. sql 'alter tablespace <tablespace_name> online';
=> Complete Open Database Recovery (when the database is initially closed). Non system tablespace is missing
A user datafile is reported missing when trying to start up the database. The datafile can be turned offline and the database started up. Restore and recovery are performed using Rman. After recovery is performed the datafile can be turned online again.
1. sqlplus /nolog
2. connect / as sysdba
3. startup mount
4. alter database datafile '<datafile_name>' offline;
5. alter database open;
6. exit;
7. rman target /
8. restore datafile '<datafile_name>';
9. recover datafile '<datafile_name>';
10. sql 'alter tablespace <tablespace_name> online';
=> Recovery of a Datafile that has no backups (database is up).
If a non system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist. Since the database is up you can check the tablespace name and put it offline. The option offline immediate is used to avoid the update of the datafile header.
Pre requisites: All relevant archived logs.
1. sqlplus '/ as sysdba'
2. alter tablespace <tablespace_name> offline immediate;
3. alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf';
4. exit
5. rman target /
6. recover tablespace <tablespace_name>;
7. sql 'alter tablespace <tablespace_name> online';
If the create datafile command needs to be executed to place the datafile on a location different than the original use:
alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf' as '/user/oradata/u02/dbtst/newdata01.dbf';
=> Restore and Recovery of a Datafile to a different location. Database is up.
If a non system datafile is missing and its original location is not available, the restore can be made to a different location and recovery performed.
Pre requisites: All relevant archived logs, complete cold or hot backup.
1. Use OS commands to restore the missing or corrupted datafile to the new location, ie:
cp -p /user/backup/uman/user01.dbf /user/oradata/u02/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile '/user/oradata/u01/dbtst/user01.dbf' to '/user/oradata/u02/dbtst/user01.dbf';
4. rman target /
5. recover tablespace <tablespace_name>;
6. sql 'alter tablespace <tablespace_name> online';
=> Control File Recovery
Always multiplex your controlfiles. If you lose only one controlfile you can replace it with a copy of the one you still have in place, and start up the database. If all controlfiles are missing, the database will crash.
Pre requisites: A backup of your controlfile and all relevant archived logs. When using Rman, always set the configuration parameter autobackup of controlfile to ON. You will need the dbid to restore the controlfile; get it from the name of the backed up controlfile. It is the number following the 'c-' at the start of the name.
1. rman target /
2. set dbid <dbid>
3. startup nomount;
4. restore controlfile from autobackup;
5. alter database mount;
6. recover database;
7. alter database open resetlogs;
8. make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant.

=> Incomplete Recovery, Until Time/Sequence/Cancel
Incomplete recovery may be necessary when the database crashes and needs to be recovered, and in the recovery process you find that an archived log is missing. In this case recovery can only be made until the sequence before the one that is missing. Another scenario for incomplete recovery occurs when an important object was dropped or incorrect data was committed on it. In this case recovery needs to be performed until before the object was dropped.
Pre requisites: A full closed or open database backup and archived logs, and the time or sequence at which the 'until' recovery needs to be performed.
1. If the database is open, shut it down to perform a full restore.
2. rman target /
3. startup mount;
4. restore database;
5. recover database until sequence 8 thread 1; # you must pass the thread; for a single instance it will always be 1.
6. alter database open resetlogs;
7. make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant.
Alternatively you may use, instead of until sequence, until time, ie: '2004-12-28:01:01:10'.
How do I find the overall database size?
how many megabytes are allocated to ALL datafiles:
select sum(bytes)/1024/1024 "Meg" from dba_data_files;
To get the size of all TEMP files:
select nvl(sum(bytes),0)/1024/1024 "Meg" from dba_temp_files;
To get the size of the on-line redo-logs:
select sum(bytes)/1024/1024 "Meg" from sys.v_$log;
Putting it all together into a single query:
select a.data_size + b.temp_size + c.redo_size "total_size"
from ( select sum(bytes) data_size from dba_data_files ) a,
     ( select nvl(sum(bytes),0) temp_size from dba_temp_files ) b,
     ( select sum(bytes) redo_size from sys.v_$log ) c
/
What are Patches and how to apply patches ?
Patching is one of the most common tasks performed by DBAs in day-to-day life. Here, we will discuss the various types of patches which are provided by Oracle. Oracle issues product fixes for its software called patches. When we apply a patch to our Oracle software installation, it updates the executable files, libraries, and object files in the software home directory. The patch application can also update configuration files and Oracle-supplied SQL schemas. Patches are applied by using OPatch (a utility supplied by Oracle), OUI or Enterprise Manager Grid Control.
Oracle Patches are of various kinds . Here , we are broadly categorizing it into two groups .
1.) Patchset
2.) Patchset Updates
1.) Patchset : A group of patches form a patch set. Patchsets are applied by invoking OUI (Oracle Universal Installer). Patchsets are generally applied for upgrade purposes. This results in a version change for our Oracle software, for example, from Oracle Database 11.2.0.1.0 to Oracle Database 11.2.0.3.0. We will cover this issue later.
2.) Patchset Updates : Patch Set Updates are proactive cumulative patches containing recommended bug fixes that are released on a regular and predictable schedule. Oracle categorizes them as :
i.) Critical Patch Update (CPU) now refers to the overall release of security fixes each quarter rather than the cumulative database security patch for the quarter. Think of the CPU as the overarching quarterly release and not as a single patch .
ii.) Patch Set Updates (PSU) are the same cumulative patches that include both the security fixes and priority fixes. The key with PSUs is they are minor version upgrades (e.g., 11.2.0.1.1 to 11.2.0.1.2). Once a PSU is applied, only PSUs can be applied in future quarters until the database is upgraded to a new base version.
iii.) Security Patch Update (SPU) terminology is introduced in the October 2012 Critical Patch Update as the term for the quarterly security patch. SPU patches are the same as previous CPU patches, just a new name . For the database, SPUs can not be applied once PSUs have been applied until the database is upgraded to a new base version.
iv.) Bundle Patches are the quarterly patches for Windows and Exadata which include both the quarterly security patches as well as recommended fixes.
PSUs (Patch Set Updates), CPUs (Critical Patch Updates) and SPUs are applied via the OPatch utility.
How to get Oracle Patches :
We obtain patches and patch sets from My Oracle Support (MOS) . The ability to download a specific patch is based on the contracts associated to the support identifiers in our My Oracle Support account. All MOS users are able to search for and view all patches, but we will be prevented from downloading certain types of patches based on our contracts.
While applying Patchset or patchset upgrades , basically there are two entities in the Oracle Database environment
i.) Oracle Database Software
ii.) Oracle Database
Most database patching activities involve the following, in this sequence:
Update "Oracle Database Software" using './runInstaller' or 'opatch apply' known as "Installation" Tasks.
Update the "Oracle Database" (catupgrd.sql or catbundle.sql ...etc) to make it compatible with the newly patched "Oracle Database Software", known as "Post Installation" tasks.
A patchset OR CPU/PSU (or one-off) patch contains Post Installation tasks to be executed on all Oracle Database instances after completing the Installation tasks. If we are planning to apply a patchset along with required one-off patches (either CPU or PSU or any other one-off patch), then we can complete the Installation tasks of the patchset + CPU/PSU/one-off patches at once and then execute the Post Installation tasks of the patchset + CPU/PSU/one-off patches in the same sequence as they were installed.
This approach minimizes the requirement of database shutdown across each patching activity and simplifies the patching mechanism as two tasks:
Software update and then
Database update.
Here , we will cover the Opatch Utility in details along with example.
OPatch is the recommended (Oracle-supplied) tool that customers are supposed to use in order to apply or roll back patches. OPatch is PLATFORM specific. Its release is based on the Oracle Universal Installer version. OPatch resides in $ORACLE_HOME/OPatch. OPatch supports the following:
Applying an interim patch.
Rolling back the application of an interim patch.
Detecting conflict when applying an interim patch after previous interim patches have been applied. It also suggests the best options to resolve a conflict .
Reporting on installed products and interim patches.
The patch metadata exists in the inventory.xml and actions.xml files under <stage_area>/<patch_id>/etc/config/
The inventory.xml file has the following information :
Bug number
Unique Patch ID
Date of patch year
Required and Optional components
OS platforms ID
Instance shutdown is required or not
Patch can be applied online or not
The actions.xml file has the following information :
File names and the locations to which they need to be copied
Components that need to be re-linked
Information about the optional and required components
Here are steps for applying patches on linux Platform :
1.) Download the required Patches from My Oracle Support (MOS) :
Login to metalink.
Click "Patches & Updates" link on top menu.
On the patch search section enter patch number and select the platform of your database.
Click search.
On the search results page, download the zip file.
2.) Opatch version : Oracle recommends that we use the latest released OPatch, which is available for download from My Oracle Support. OPatch is compatible only with the version of Oracle Universal Installer that is installed in the Oracle home. We can list all OPatch commands by using the 'opatch help' command.
3.) Stop all the Oracle services : Before applying the patch, make sure all the Oracle services are down. If they are not, stop the Oracle-related services. Let's crosscheck it:
$ ps -ef | grep pmon
oracle 15871 15484 0 11:20 pts/2 00:00:00 grep pmon
$ ps -ef | grep tns
oracle 15874 15484 0 11:20 pts/2 00:00:00 grep tns
4.) Take Cold Backup : It is highly recommended to back up the software directory which we are patching before performing any patch operation. This applies to Oracle Database and Oracle Grid Infrastructure software installation directories. Take the backup of the following:
Take the Oracle software directory backup
$ tar -zcvf /u01/app/oracle/product/11.2.0/ohsw-bkp-b4-ptch.tar.gz /u01/app/oracle/product/11.2.0
Take backup of the oracle database.
$ tar -zcvf /u01/app/oracle/oradata/dbfl-b4-ptch.tar.gz /u01/app/oracle/oradata
Here all the database files are in the oradata directory.
Take backup of oraInventory
$ tar -zcvf /u01/app/oraInventary/orinv-b4-ptch.tar.gz /u01/app/oraInventary
5.) Apply the patch : Set our current directory to the directory where the patch is located and then run the OPatch utility by entering the following commands:
$ export PATH=$ORACLE_HOME/OPatch:$PATH
$ opatch apply
6.) Post Installation :
Once the OPatch installation has completed successfully, perform the post installation steps: start up the Oracle database with the newly patched software and run the catbundle.sql script, which is found in the $ORACLE_HOME/rdbms/admin directory.
The catbundle.sql execution is reflected in the dba_registry_history view by a row associated with bundle series PSU.
7.) Finally check the patch status : We can check the final status of the patch applied to the new Oracle Home by using the command below.
SQL> select * from dba_registry_history order by action_time desc ;
Notes :
i.) If we are using a Data Guard Physical Standby database, we must install this patch on both the primary database and the physical standby database.
ii.) While applying the patch, take care of the mount point status. There should be sufficient space.
Applying CPUJan2012 Patch on 11.2.0.2/Linux(64 bit)
STEPS:-
1. Database Version
2. OS version
3. Download CPUJan2012 patch for 11.2.0.2.0
4. Opatch Version
5. Sessions Status
6. Invalid objects
7. Status of Oracle Services
8. Backup
9. Apply Opatch
10. Post Installation
11. Check the status from registry$history
12. Recompiling Views in Database
1) Database Version
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
SQL>
2) OS version
oracle-ckpt.com> file /bin/ls
/bin/ls: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9,
dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped
oracle-ckpt.com>
3) Download CPUJan2012 patch for 11.2.0.2
4) Opatch Version
To apply CPUJan2012, OPatch utility version 11.2.0.1.0 or later is required. Oracle recommends that you use
the latest released OPatch 11.2, which is available for download from My Oracle Support patch 6880880 by
selecting the 11.2.0.0.0 release.
oracle-ckpt.com> export PATH=/u00/app/oracle/product/11.2.0/OPatch:$PATH
oracle-ckpt.com> opatch lsinventory
Invoking OPatch 11.2.0.1.1
Oracle Interim Patch Installer version 11.2.0.1.1
Copyright (c) 2009, Oracle Corporation. All rights reserved.
Oracle Home : /u00/app/oracle/product/11.2.0
Central Inventory : /u00/app/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.1
OUI version : 11.2.0.2.0
OUI location : /u00/app/oracle/product/11.2.0/oui
Log file location : /u00/app/oracle/product/11.2.0/cfgtoollogs/opatch/opatch2012-03-
03_06-32-39AM.log
Patch history file:
/u00/app/oracle/product/11.2.0/cfgtoollogs/opatch/opatch_history.txt
Lsinventory Output file location :
/u00/app/oracle/product/11.2.0/cfgtoollogs/opatch/lsinv/lsinventory2012-03-03_06-32-
39AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 11g 11.2.0.2.0
There are 1 products installed in this Oracle Home.
5) Sessions Status
Check how many sessions are ACTIVE. If any are found, ask the application team to bring down all applications/processes.
SQL> select username,count(*) from v$session where username is not null group by
username;
USERNAME COUNT(*)
------------------------------ ----------
26
SOTCADM 6
SYS 1
SQL>
6) Invalid objects
SQL> select count(*),object_type from dba_objects where status <> 'VALID' and
OWNER !='PUBLIC' and OBJECT_TYPE!='SYNONYM' group by object_type;
COUNT(*) OBJECT_TYPE
---------- -------------------
38 TRIGGER
2 VIEW
SQL>
7) Status of Oracle Services
oracle-ckpt.com> ps -ef|grep pmon
oracle 8016 30235 0 02:17 pts/0 00:00:00 grep pmon
oracle-ckpt.com> ps -ef|grep tns
oracle 8019 30235 0 02:17 pts/0 00:00:00 grep tns
oracle-ckpt.com>
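The manual `ps` checks above can also be wrapped in a small function so the "services down" state is verified in one step before the backup. A minimal sketch, using the conventional `ora_pmon_` and `tnslsnr` process names (adjust the patterns if your environment differs); the bracket trick in the grep pattern keeps grep from matching itself:

```shell
# Succeed only when no database instance (pmon) or listener (tnslsnr)
# process is running on this host.
services_down() {
    if ps -ef | grep '[o]ra_pmon_' >/dev/null; then
        echo "instance still running"
        return 1
    fi
    if ps -ef | grep '[t]nslsnr' >/dev/null; then
        echo "listener still running"
        return 1
    fi
    echo "all Oracle services are down"
    return 0
}

services_down
```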
8) Backup
Take Cold Backup of Database & Backup of (ORACLE_HOME & Inventory)
oracle-ckpt.com> tar -zcpvf 11.2.0_Home_Inventory_Backup_$(date +%Y%m%d).tar.gz
/u00/app/oracle/product/11.2.0 /u00/app/oraInventory/
/u00/app/oracle/product/11.2.0/
/u00/app/oracle/product/11.2.0/jdev/
/u00/app/oracle/product/11.2.0/jdev/lib/
/u00/app/oracle/product/11.2.0/jdev/lib/jdev-rt.jar
/u00/app/oracle/product/11.2.0/jdev/lib/javacore.jar
/u00/app/oracle/product/11.2.0/jdev/doc/
/u00/app/oracle/product/11.2.0/jdev/doc/extension/
/u00/app/oracle/product/11.2.0/jdev/doc/extension/extension.xsd
/u00/app/oracle/product/11.2.0/olap/
--- All files related to ORACLE_HOME & Inventory ------
/u00/app/oraInventory/orainstRoot.sh
/u00/app/oraInventory/ContentsXML/
/u00/app/oraInventory/ContentsXML/comps.xml
/u00/app/oraInventory/ContentsXML/libs.xml
/u00/app/oraInventory/ContentsXML/inventory.xml
/u00/app/oraInventory/install.platform
/u00/app/oraInventory/oui/
/u00/app/oraInventory/oui/srcs.lst
oracle-ckpt.com>
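Before shutting everything down for patching, it is worth confirming that the tar archive is readable and complete: `tar -tzf` exits non-zero on a truncated or corrupt gzip archive. A self-contained sketch against a throwaway directory (substitute the real backup file and paths):

```shell
# Build a small sample tree and archive it the same way as above.
rm -rf /tmp/demo_home
mkdir -p /tmp/demo_home/lib
echo data > /tmp/demo_home/lib/libserver11.a
tar -zcpf /tmp/demo_backup.tar.gz -C /tmp demo_home

# Verify: listing the archive fails on corruption; counting entries
# gives a quick sanity check against the expected file count.
entries=$(tar -tzf /tmp/demo_backup.tar.gz | wc -l)
echo "archive holds $entries entries"
```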
9) Apply Opatch
oracle-ckpt.com> export PATH=$ORACLE_HOME/OPatch:$PATH
oracle-ckpt.com> opatch napply -skip_subset -skip_duplicate
Invoking OPatch 11.2.0.1.1
Oracle Interim Patch Installer version 11.2.0.1.1
Copyright (c) 2009, Oracle Corporation. All rights reserved.
UTIL session
Oracle Home : /u00/app/oracle/product/11.2.0
Central Inventory : /u00/app/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.1
OUI version : 11.2.0.2.0
OUI location : /u00/app/oracle/product/11.2.0/oui
Log file location : /u00/app/oracle/product/11.2.0/cfgtoollogs/opatch/opatch2012-02-
26_02-17-44AM.log
Patch history file:
/u00/app/oracle/product/11.2.0/cfgtoollogs/opatch/opatch_history.txt
Invoking utility "napply"
Checking conflict among patches...
Checking if Oracle Home has components required by patches...
Checking skip_duplicate
Checking skip_subset
Checking conflicts against Oracle Home...
OPatch continues with these patches: 11830776 11830777 12586486 12586487
12586488 12586489 12586491 12586492 12586493 12586494 12586495 12586496
12846268 12846269 13343244 13386082 13468884
Do you want to proceed? [y|n]
y
User Responded with: Y
Running prerequisite checks...
OPatch detected non-cluster Oracle Home from the inventory and will patch the local
system only.
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u00/app/oracle/product/11.2.0')
Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files affected by the patch 'NApply' for restore. This might take a
while...
Applying patch 11830776...
ApplySession applying interim patch '11830776' to OH '/u00/app/oracle/product/11.2.0'
Backing up files affected by the patch '11830776' for rollback. This might take a
while...
Patching component oracle.sysman.console.db, 11.2.0.2.0...
Updating jar file "/u00/app/oracle/product/11.2.0/sysman/jlib/emCORE.jar" with
"/sysman/jlib/emCORE.jar/oracle/sysman/eml/admin/rep/AdminResourceBundle.class"
Updating jar file "/u00/app/oracle/product/11.2.0/sysman/jlib/emCORE.jar" with
"/sysman/jlib/emCORE.jar/oracle/sysman/eml/admin/rep/AdminResourceBundleID.class"
Updating jar file "/u00/app/oracle/product/11.2.0/sysman/jlib/emCORE.jar" with
"/sysman/jlib/emCORE.jar/oracle/sysman/eml/admin/rep/UserData.class"
Copying file to
"/u00/app/oracle/product/11.2.0/oc4j/j2ee/oc4j_applications/applications/em/em/admin/
rep/editUserSummary.uix"
Patching component oracle.rdbms, 11.2.0.2.0...
Updating archive file "/u00/app/oracle/product/11.2.0/lib/libserver11.a" with
"lib/libserver11.a/qerrm.o"
Updating archive file "/u00/app/oracle/product/11.2.0/lib/libserver11.a" with
"lib/libserver11.a/kspt.o"
Updating archive file "/u00/app/oracle/product/11.2.0/lib/libserver11.a" with
"lib/libserver11.a/qmix.o"
Updating archive file "/u00/app/oracle/product/11.2.0/lib/libserver11.a" with
"lib/libserver11.a/qmxtk.o"
Updating archive file "/u00/app/oracle/product/11.2.0/rdbms/lib/libknlopt.a" with
"rdbms/lib/libknlopt.a/kkxwtp.o"
Copying file to "/u00/app/oracle/product/11.2.0/rdbms/lib/kkxwtp.o"
ApplySession adding interim patch '13468884' to inventory
Verifying the update...
Inventory check OK: Patch ID 13468884 is registered in Oracle Home inventory with
proper meta-data.
Files check OK: Files from Patch ID 13468884 are present in Oracle Home.
Running make for target client_sharedlib
Running make for target client_sharedlib
Running make for target ioracle
The local system has been patched and can be restarted.
UtilSession: N-Apply done.
OPatch succeeded.
oracle-ckpt.com>
10) Post Installation
For each database instance running on the Oracle home being patched, connect to the
database using SQL*Plus as SYSDBA and run the catbundle.sql script as follows:
oracle-ckpt.com> sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Sun Feb 26 02:26:39 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> @?/rdbms/admin/catbundle.sql cpu apply
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Generating apply and rollback scripts...
Check the following file for errors:
/u00/app/oracle/cfgtoollogs/catbundle/
catbundle_CPU_PROD_GENERATE_2012Feb26_02_27_09.log
Apply script: /u00/app/oracle/product/11.2.0/rdbms/admin/catbundle_CPU_PROD_APPLY.sql
Rollback script:
/u00/app/oracle/product/11.2.0/rdbms/admin/catbundle_CPU_PROD_ROLLBACK.sql
PL/SQL procedure successfully completed.
Executing script file...
SQL> COLUMN spool_file NEW_VALUE spool_file NOPRINT
SQL> SELECT '/u00/app/oracle/cfgtoollogs/catbundle/' || 'catbundle_CPU_' || name ||
'_APPLY_' || TO_CHAR(SYSDATE, 'YYYYMonDD_hh24_mi_ss',
'NLS_DATE_LANGUAGE=''AMERICAN''') || '.log' AS spool_file FROM v$database;
SQL> ALTER SESSION SET current_schema = SYS;
Session altered.
SQL> PROMPT Updating registry...
Updating registry...
SQL> INSERT INTO registry$history
2 (action_time, action,
3 namespace, version, id,
4 bundle_series, comments)
5 VALUES
6 (SYSTIMESTAMP, 'APPLY',
7 SYS_CONTEXT('REGISTRY$CTX','NAMESPACE'),
8 '11.2.0.2',
9 4,
10 'CPU',
11 'CPUJan2012');
1 row created.
SQL> COMMIT;
Commit complete.
SQL> SPOOL off
SQL> SET echo off
Check the following log file for errors:
/u00/app/oracle/cfgtoollogs/catbundle/catbundle_CPU_PROD_APPLY_2012Feb26_02_27_12.log
SQL>
11) Check the status from registry$history
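The check itself is not shown in the transcript; a typical query against registry$history (column list matching the INSERT performed by catbundle above; formatting is illustrative) looks like:

```sql
SQL> set lines 200
SQL> select action_time, action, version, id, bundle_series, comments
  2  from registry$history
  3  order by action_time;
```

The most recent row should show ACTION = APPLY with BUNDLE_SERIES = CPU and COMMENTS = CPUJan2012.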
12) Compile Invalid objects by executing “utlrp.sql”.
Before Patching
SQL> select count(*),object_type from dba_objects where status <> 'VALID' and
OWNER !='PUBLIC' and OBJECT_TYPE!='SYNONYM' group by object_type;
COUNT(*) OBJECT_TYPE
---------- -------------------
38 TRIGGER
2 VIEW
SQL>
After Patching & Recompile
SQL> select count(*),object_type from dba_objects where status <> 'VALID' and OWNER !
='PUBLIC' and OBJECT_TYPE!='SYNONYM' group by object_type;
COUNT(*) OBJECT_TYPE
---------- -------------------
2 VIEW
SQL>
13) Opatch Status
oracle-ckpt.com> opatch lsinventory|grep 13343244
Patch 13343244 : applied on Sun Feb 26 02:21:14 EST 2012
12419321, 12828071, 13343244, 11724984
oracle-ckpt.com>
Oracle Database 12c Release 1 Installation on Linux
Oracle 12c (Oracle 12.1.0.1) has been released and is available for download. The Oracle 12c installation steps are almost the same as those for Oracle 10g and 11g. Oracle 12c is available for 64-bit platforms. Here we will see a step-by-step installation of an Oracle 12c database.
Step 1 : Oracle S/W Installation
We can download the Oracle 12c software from e-delivery or from OTN. Below are the links:
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
https://edelivery.oracle.com/EPD/Download/get_form?egroup_aru_number=16496132
Step 2 : Hardware Requirements
Oracle recommends the following requirements for installation:
RAM = 2 GB or more
Swap = 1.5 times the RAM if RAM is less than 2 GB; equal to the RAM size if RAM is more than 2 GB
Disk Space = more than 6.4 GB for Enterprise Edition
Tmp directory = minimum 1 GB of free space
Step 3 : Hardware Verifications
[root@server1 ~]# grep MemTotal /proc/meminfo
MemTotal: 3017140 kB
[root@server1 ~]# grep SwapTotal /proc/meminfo
SwapTotal: 4105420 kB
[root@server1 ~]# df -h /tmp
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              46G   19G   25G  44% /
[root@server1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              46G   19G   25G  44% /
tmpfs                 1.5G     0  1.5G   0% /dev/shm
/dev/hdc              3.4G  3.4G     0 100% /media/RHEL_5.3 x86_64 DVD
[root@server1 ~]# free
             total       used       free     shared    buffers     cached
Mem:       3017140     715376    2301764          0     109776     384096
-/+ buffers/cache:     221504    2795636
Swap:      4105420          0    4105420
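The swap rule from step 2 can be checked mechanically: read MemTotal, derive the required swap (1.5x below 2 GB of RAM, 1x at or above), and compare it with SwapTotal. A sketch using the kB values shown above (on a live system the two inputs would come from `grep MemTotal /proc/meminfo` and `grep SwapTotal /proc/meminfo`):

```shell
# Required swap per the rule in step 2:
#   RAM < 2 GB  -> swap >= 1.5 x RAM
#   RAM >= 2 GB -> swap >= RAM
# All values are in kB, matching /proc/meminfo.
required_swap_kb() {
    ram_kb=$1
    two_gb_kb=$((2 * 1024 * 1024))
    if [ "$ram_kb" -lt "$two_gb_kb" ]; then
        echo $((ram_kb * 3 / 2))
    else
        echo "$ram_kb"
    fi
}

# With the values verified above: 3017140 kB RAM, 4105420 kB swap.
need=$(required_swap_kb 3017140)
[ 4105420 -ge "$need" ] && echo "swap is sufficient"
```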
[root@server1 ~]# uname -m
x86_64
[root@server1 ~]# uname -a
Linux server1.example.com 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
Step 4 : Packages Verifications
The following packages are required for the Oracle installation, so make sure all of them are installed:
make-3.81
binutils-2.17.50
gcc-4.1.2 (x86_64)
gcc-c++-4.1.2 (x86_64)
compat-libcap1 (x86_64)
compat-libstdc++-33 (x86_64)
glibc-2.5-58 (x86_64)
glibc-devel-2.5 (x86_64)
libgcc-4.1.2 (x86_64)
libstdc++-4.1.2 (x86_64)
libstdc++-devel-4 (x86_64)
libaio-0.3.106 (x86_64)
libaio-devel-0.3 (x86_64)
ksh
sysstat
unixODBC
unixODBC-devel
Execute the below command as root to make sure that all of these RPMs are installed. If any are missing, download them from the appropriate Linux site, or find the packages on the Red Hat Enterprise Linux 5 DVD. For example:
# rpm -qa | grep glib*
The above command displays all installed packages whose name starts with glib; similarly, we can check all the other packages. If any of the above packages are not installed, run the following command:
# rpm -ivh </path/to/><version>.i386.rpm
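Checking each package by hand is tedious; the rpm queries can be wrapped in a loop that collects anything missing. A sketch with an abbreviated package list; the `is_installed` helper is a stand-in for `rpm -q "$pkg" >/dev/null 2>&1`, shown here against a plain word list so the logic is self-contained:

```shell
# Packages assumed present on this example host (stand-in data).
installed="make binutils gcc glibc libaio ksh sysstat"

# Stand-in for: rpm -q "$1" >/dev/null 2>&1
is_installed() {
    for p in $installed; do
        [ "$p" = "$1" ] && return 0
    done
    return 1
}

# Collect every required package that is not installed.
missing=""
for pkg in make binutils gcc glibc libaio ksh sysstat unixODBC; do
    is_installed "$pkg" || missing="$missing $pkg"
done

if [ -n "$missing" ]; then
    echo "missing:$missing"
else
    echo "all required packages present"
fi
```

On a real host, replace `is_installed` with the rpm query and extend the `for` list to the full set above.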
Step 5 : Kernel Parameters
Add the below kernel parameters to the /etc/sysctl.conf file:
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
After adding these lines to /etc/sysctl.conf, run the below command as root to make them take effect:
# sysctl -p
Step 6 : Edit the /etc/security/limits.conf file
To improve the performance of the software on Linux systems, we must increase the following shell limits for the oracle user . Add the following lines to the /etc/security/limits.conf file :
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Here "nproc" is the maximum number of processes available to the user, and "nofile" is the maximum number of open file descriptors.
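After editing limits.conf, the effective values can be confirmed from a fresh login shell as the oracle user. A minimal sketch; it only reads back what the shell reports, so it is safe to run anywhere:

```shell
# 'ulimit -n' reports the open-file-descriptor limit (nofile);
# 'ulimit -u' reports the max user processes (nproc).
nofile=$(ulimit -n)
nproc=$(ulimit -u)
echo "nofile=$nofile nproc=$nproc"
```

Note that limits.conf changes apply only to new login sessions, so log out and back in as oracle before checking.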
Step 7 : Create User and Groups
Starting with Oracle Database 12c, we can create new administrative privileges that are more task-specific and less privileged than the OSDBA/SYSDBA system privileges, to support specific administrative tasks required for everyday database operation. Users granted these system privileges are also authenticated through operating system group membership.
We do not have to create these specific group names, but during installation we are prompted to provide operating system groups whose members are granted access to these system privileges. We can assign the same group to provide authentication for these privileges, but Oracle recommends that we provide a unique group to designate each privilege.
i .) The OSDBA group (typically, dba) : This group identifies operating system user accounts that have database administrative privileges (the SYSDBA privilege).
#groupadd -g 501 dba
ii .) The Oracle Inventory Group (oinstall) : This group owns the Oracle inventory, which is a catalog of all Oracle software installed on the system. A single Oracle Inventory group is required for all installations of Oracle software on the system.
# groupadd -g 502 oinstall
iii .) The OSOPER group for Oracle Database (typically, oper) : This is an optional group. We create this group if we want a separate group of operating system users to have a limited set of database administrative privileges for starting up and shutting down the database (the SYSOPER privilege).
# groupadd -g 503 oper
iv .) The OSBACKUPDBA group for Oracle Database (typically, backupdba) : Create this group if we want a separate group of operating system users to have a limited set of database backup and recovery related administrative privileges (the SYSBACKUP privilege).
# groupadd -g 504 backupdba
v .) The OSDGDBA group for Oracle Data Guard (typically, dgdba) : Create this group if we want a separate group of operating system users to have a limited set of privileges to administer and monitor Oracle Data Guard (the SYSDG privilege).
# groupadd -g 505 dgdba
vi .) The OSKMDBA group for encryption key management (typically, kmdba) : Create this group if we want a separate group of operating system users to have a limited set of privileges for encryption key management, such as Oracle Wallet Manager management (the SYSKM privilege).
# groupadd -g 506 kmdba
vii .) The OSDBA group for Oracle ASM (typically, asmdba) : The OSDBA group for Oracle ASM can be the same group used as the OSDBA group for the database, or we can create a separate OSDBA group for Oracle ASM to provide administrative access to Oracle ASM instances.
# groupadd -g 507 asmdba
viii .) The OSASM group for Oracle ASM Administration (typically, asmadmin) : Create this group as a separate group if we want to have separate administration privileges groups for Oracle ASM and Oracle Database administrators. Members of this group are granted the SYSASM system privilege to administer Oracle ASM.
# groupadd -g 508 asmadmin
ix .) The OSOPER group for Oracle ASM (typically, asmoper) : This is an optional group. Create this group if we want a separate group of operating system users to have a limited set of Oracle instance administrative privileges (the SYSOPER for ASM privilege), including starting up and stopping the Oracle ASM instance. By default, members of the OSASM group also have all privileges granted by the SYSOPER for ASM privilege.
# groupadd -g 509 asmoper
x .) Create the Oracle user :
# useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba oracle
# passwd oracle
<<ORACLE PASSWORD>>
The -u option specifies the user ID. Using this flag is optional because the system can provide an automatically generated user ID number. However, Oracle recommends that we specify a number. Note the user ID number, because we need it during preinstallation.
Step 8 : Creating oracle directories
As per OFA, the Oracle base directory has the path /mount_point/app/oracle_sw_owner, where mount_point is the mount point directory for the file system that will contain the Oracle software. I have used /u01 for the mount point directory. However, we could choose another mount point directory, such as /oracle or /opt/soft.
# mkdir -p /u01/oracle/product/12.1.0/db_1
# chown -R oracle:oinstall /u01
# chmod -R 777 /u01
Step 9 : Setting the Oracle Environment
Edit the /home/oracle/.bash_profile file and add the following lines:
# su - oracle
$ vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
Step 10 : Check firewall and SELinux
Make sure SELinux is either disabled or permissive. Check the /etc/selinux/config file and make the following change:
SELINUX=permissive
Once the SELinux value is set, restart the server or run the below command:
# setenforce Permissive
If the firewall is enabled, we need to disable it. We can disable it using the below commands:
# service iptables stop
# chkconfig iptables off
Step 11 : Finally, run runInstaller to install Oracle 12c Release 1
Once runInstaller is initiated, the OUI is invoked and the rest is an interactive graphical console.
Click next and proceed forward .
Click on "Yes" button and proceed .
Select "Skip Software Updates" option and click on next button .
Select "Create and configure a database" option and click on next button
Here , I selected the "Desktop Class" option . Click on next button
Enter the Administrative Password and click next
Click on "Yes" option and proceed forward
Click on next button
Make sure all the prerequisite checks are successful and have passed.
Summary page displays all the locations and database information . Click next
Oracle Database Installation in process
Execute the configuration scripts as root.
Run the scripts as root.
Oracle Database installation in process.
Database creation in process .
Database creation in process .
Database creation completed.
Installation of the Oracle database was successful.
Finally, connected with the Oracle 12c database.