Please see my other blog for Oracle EBusiness Suite Posts - EBMentors


Note: All posts are based on a practical approach, avoiding lengthy theory. All have been tested on development servers. Please don't test any post on production servers until you are sure.

Saturday, December 25, 2010

Setting the Default Connect String

If you want to avoid typing the connect string every time on your local client, you can set a default in the environment:
-- in unix
export TWO_TASK=HOMEDEV  -- your db connect string
-- in windows: set the LOCAL key to your db value in the registry, or:
SET LOCAL=HOMEDEV  -- your db connect string

Monday, December 20, 2010

SQLNet Logging/Tracing

Net Logging and Trace
Oracle Net logging and tracing are configured in sqlnet.ora, typically found at ORACLE_HOME\network\admin.
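As an illustrative sketch (the directory paths are placeholders, not from the original post), client-side logging and tracing could be enabled with entries like these in sqlnet.ora:

# sqlnet.ora -- example client logging/tracing settings
TRACE_LEVEL_CLIENT = 16                           # 0=off, 4=user, 10=admin, 16=support
TRACE_DIRECTORY_CLIENT = D:\oracle\network\trace  # placeholder path
LOG_DIRECTORY_CLIENT = D:\oracle\network\log      # placeholder path
TRACE_TIMESTAMP_CLIENT = ON

A higher trace level produces much larger files, so level 16 is normally used only while reproducing a problem.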

Sunday, December 19, 2010

Deny Access to DB for Specific IPs

We can configure the sqlnet.ora file to allow and deny access to the database via the valid node checking parameters.
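A minimal sketch of valid node checking in sqlnet.ora (the IP addresses are placeholders):

# sqlnet.ora -- valid node checking
TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (192.168.1.10, 192.168.1.11)  # only these clients may connect
TCP.EXCLUDED_NODES = (192.168.1.99)               # these clients are denied

The listener must be restarted for the changes to take effect.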

Password protected listener

Listener Intro:
The Oracle Database Listener is the database server software component that manages the network traffic between the Oracle Database and the client. The Oracle Database Listener listens on a specific network port (default 1521) and forwards network connections to the Database.

Troubleshooting Guide for the CMAN in 10g

These are the most common problems you face when using a 10g CMAN:

Using Connection Manager

Oracle Connection Manager is a proxy server, an intermediate server that forwards connection requests to database servers or to other proxy servers. It has two primary functions:

Wednesday, December 15, 2010

Database Creation 10g Win - non ASM

The following will help you create a database (non-ASM) in a Windows environment.
1- Create the parameter file; all parameters are self-explanatory. Create all the directories referenced in the parameter file in your environment.

Dropping a Database

SQL> shutdown abort;
ORACLE instance shut down.

SQL> startup mount exclusive restrict pfile=D:\FRATEST\FRADB\pfile\initFRADB.ora
ORACLE instance started.
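With the instance mounted in exclusive restricted mode as above, the step that completes the drop (available in 10g and later) is:

SQL> drop database;

Be careful: this removes the datafiles, online redo logs, and control files recorded in the control file.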

Clear Flash Recovery Area

If you see an error in your alert log like the one below:
ORA-19815: WARNING: db_recovery_file_dest_size of 6442450944 bytes is 100.00% used, and has 0 remaining bytes available.
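To clear space you can either raise the limit or let RMAN remove files it no longer needs; a sketch (the new size is an example):

-- option 1: enlarge the flash recovery area
SQL> alter system set db_recovery_file_dest_size=10G scope=both;

-- option 2: remove expired/obsolete files through RMAN
RMAN> crosscheck archivelog all;
RMAN> delete noprompt expired archivelog all;
RMAN> delete noprompt obsolete;

Never delete flash recovery area files at the OS level; the database would still count them against the quota until RMAN crosschecks them.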

Tuesday, December 14, 2010

ORA-273xx & ORA-04030 Errors

One of our clients has been facing ORA-273xx and ORA-04030 errors repeatedly in the alert/trace logs. After many such errors the instance crashes, and nothing is logged for the crash. The situation is not consistent; sometimes it happens, and otherwise everything runs smoothly. I was sent to the client site to investigate.

Sunday, December 12, 2010

Start Oracle Clusterware 10gR2 Services Manually in Windows

Oracle Clusterware services in a Windows RAC environment should ideally be allowed to start automatically, with the help of the Oracle Process Manager Service (OPMD).

Node 2 failed after applying OS Patches

Yesterday the client applied OS security patches on Node 2 (Windows 2003 64-bit) and restarted the server. After the restart, cluster-related services hung.
I set all the Oracle services to Manual startup type and issued the following command.

OAS- HTTP Server start failure

Today the client installed new OS patches, and after the restart the HTTP server failed to start; the following error was found in the logs.

How to Check Whether a Specific Port Is Being Used


The port numbers are divided into three ranges: the Well Known Ports, the Registered Ports, and the Dynamic and/or Private Ports.
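For example, on Windows you can check whether the default listener port 1521 is in use like this (the port number is just an example):

C:\> netstat -ano | findstr :1521

The last column of any LISTENING line is the PID, which you can map to a process name with tasklist.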

Saturday, December 11, 2010

DB Monitoring Views - Part 2 (Locks)

Purpose: Shows current locks in the database, with the lock type and a kill statement for convenience.

DB Monitoring Views - Part 1 (Heterogeneous Services)

Purpose: Presents the necessary information for all users who are waiting for a heterogeneous services response, and provides the statement to kill the associated session if one is hanging for an undue time.

Recovering by Oracle Flashback

Oracle Flashback Technology provides the DBA the ability to recover to a specified point in time in the past very quickly and easily, without taking any part of the database offline.

Flashback Table: Row movement must be enabled on the table.


System Change Number (SCN) This is a sequential counter, identifying precisely a moment in the database.
This is the most accurate way to identify a moment in time.
SQL> SELECT CURRENT_SCN FROM V$DATABASE; -- to get the current change number.
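Putting the pieces together, a hedged sketch of a table flashback (the table name and SCN are illustrative):

SQL> ALTER TABLE scott.emp ENABLE ROW MOVEMENT;
SQL> SELECT CURRENT_SCN FROM V$DATABASE;  -- note the SCN before the bad change
SQL> FLASHBACK TABLE scott.emp TO SCN 1234567;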

Wednesday, December 08, 2010

Recover DB - SQL*Plus

recover database until cancel
Do the following to test the scenario:
1- Take full db backup

Different Clauses to Be Used in RMAN Scripts

There are many different clauses that can be used in an RMAN script; use whichever clause suits your environment. I'm providing them one by one so that the concept is clear with the help of the output.

Restore With RMAN (Examples)

Steps for media recovery: 
1. Mount or open the database.
Mount the database when performing whole database recovery, or open the database when performing online tablespace recovery.

Tuesday, December 07, 2010

RMAN Backup Intro - Example (10gR2)

RMAN creates backups of the physical database and can compress them if you prefer; you can back up the database in whole or in part. To take an RMAN backup, your database should be in ARCHIVELOG mode.
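As a minimal sketch, a compressed full backup including the archived logs looks like this:

RMAN> connect target /
RMAN> backup as compressed backupset database plus archivelog;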

How to get a list of installed operating system patches and fixes

Sometimes, in the release notes or installation guide, there is a reference to a system patch or a specific upgrade that needs to be present on the system prior to the installation. To verify these patches and upgrades, the list below has been compiled to determine the current system version and the patch versions previously applied.

Diskpart Command-Line Utility

When we install Oracle RAC 10g on Windows 2003, we have to use the Diskpart utility as below

Monday, December 06, 2010

RAC Utilities

OIFCFG (Oracle Interface Configuration):
A typical NIC is uniquely identified with its name, address, subnet mask and type. The type of NIC indicates the purpose for which the network is configured. Moreover, in our RAC we have the following supported interfaces:

Administering Clusterware (10gR2)

Oracle Clusterware is the cross platform cluster software required to run the Real Application Clusters (RAC) option for Oracle Database. It provides the basic clustering services at the operating system level that enable Oracle software to run in clustering mode.

Connecting DB with Connect Naming

The Easy Connect naming method was introduced in Oracle 10g. The benefits are simple: the need to look up a service name in tnsnames.ora for TCP/IP environments is eliminated, no directory naming system is needed, and clients connect automatically. Connecting is also fairly simple:

SQL@ > conn scott/tiger@OR1/ASMDB  (ASMDB is the service - instance, OR1 is the DB server)

SQL@ OR11/HOME> conn scott/tiger@OR1/HOME

SQL Plus Options

If you want to change the SQL*Plus prompt so that it automatically includes the name of the current instance, type the following commands in SQL*Plus:

If you want to set it permanently, put it in glogin.sql on your client machine.
For example, on my machine I set it as below:
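A sketch of such a glogin.sql entry, using the predefined 10g substitution variables:

-- glogin.sql: show the user and instance in the prompt
set sqlprompt "_user'@'_connect_identifier> "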



Voting Disk and OCR Files Backup (Win 10gR2)

Voting Disk:
The voting disk is nothing but a file that contains and manages information of all the node memberships. Oracle RAC uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on shared disk. For high availability, Oracle recommends that you have multiple voting disks. The Oracle Clusterware enables multiple voting disks but you must have an odd number of voting disks, such as three, five, and so on. If you define a single voting disk, then you should use external mirroring to provide redundancy.
I have a Windows environment for my RAC, so I will not be using dd; instead I'll use the Oracle-supplied utility ocopy.
-- where Vote Disk is available
D:\oracle\product\10.2.0\crs\BIN>crsctl query css votedisk
 0.     0    \\.\votedsk1

located 1 votedisk(s).

Back it up
D:\oracle\product\10.2.0\crs\BIN>ocopy \\.\votedsk1 D:\Home_Backup\VOTEnOCR

Note: If you have multiplexed your voting disk locations then make sure to back up EACH of these!

--Info in Vote file
D:\oracle\product\10.2.0\crs\BIN>olsnodes -n -v
prlslms: Initializing LXL global
prlsndmain: Initializing CLSS context
prlsmemberlist: No of cluster members configured = 256
prlsmemberlist: Getting information for nodenum = 1
prlsmemberlist: node_name = db1
prlsmemberlist: ctx->lsdata->node_num = 1
prls_printdata: Printing the node data
db1    1
prlsmemberlist: Getting information for nodenum = 2
prlsmemberlist: node_name = db2
prlsmemberlist: ctx->lsdata->node_num = 2
prls_printdata: Printing the node data
db2    2
prlsndmain: olsnodes executed successfully
prlsndterm: Terminating LSF

Changes in 11gR2
There are some changes in the way we now handle the important Clusterware components, the Voting Disk and the Oracle Cluster Registry (OCR): amazingly, we can now store the two inside an Automatic Storage Management (ASM) disk group, which was not possible in 10g.

The Voting Disk (or Voting File, as it is now also referred to) is not striped but put as a whole on ASM disks; if we use normal redundancy on the diskgroup, 3 Voting Files are placed, each on one ASM disk. This is a concern if our ASM diskgroup consists of only 2 ASM disks! Therefore, the new quorum failgroup clause was introduced:

create diskgroup data normal redundancy
 failgroup fg1 disk 'ORCL:ASMDISK1'
 failgroup fg2 disk 'ORCL:ASMDISK2'
 quorum failgroup fg3 disk 'ORCL:ASMDISK3'
 attribute 'compatible.asm' = '11.2';  -- quorum failgroups require compatible.asm of 11.2

The failgroup fg3 above needs only one small Disk (300 MB should be on the safe side here, since the Voting File is only about 280 MB in size) to keep one Mirror of the Voting File. fg1 and fg2 will contain each one Voting File and all the other stripes of the Database Area as well, but fg3 will only get that one Voting File.
[root@uhesse1 ~]#  /u01/app/11.2.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name       Disk group
--  -----    -----------------                ---------       ----------
 1. ONLINE   511de6e64e354f9bbf4be318fc928c28 (ORCL:ASMDISK1) [DATA]
 2. ONLINE   2f1973ed4be84f50bffc2475949b428f (ORCL:ASMDISK2) [DATA]
 3. ONLINE   5ed44fb7e79c4f79bfaf09b402ba70df (ORCL:ASMDISK3) [DATA]

Another important change regarding the Voting File is that it is no longer supported to take a manual backup of it with dd or ocopy. Instead, the Voting File is backed up automatically into the OCR.
The OCR is a file that manages the cluster and RAC configuration. It contains information pertaining to instance-to-node mapping, the node list, and resource profiles for customized applications in your Clusterware.

Oracle Clusterware automatically creates OCR backups every four hours, and it always retains the last three backup copies of the OCR. The CRSD process that creates the backups also creates and retains an OCR backup for each full day and, at the end of a week, a complete backup for the week. So there is a robust backup taking place in the background. And you guessed it right: you cannot alter the backup frequencies. This is meant to protect you, the DBA; you should copy these generated backup files at least once daily to a different device from where the primary OCR resides. These files are located at %CRS_home/cdata/my_cluster.

OCR Location:
D:\oracle\product\10.2.0\crs\BIN>ocrconfig -showbackup

db1     2010/12/06 09:19:48     D:\oracle\product\10.2.0\crs\cdata\crs
db1     2010/12/06 05:19:48     D:\oracle\product\10.2.0\crs\cdata\crs
db1     2010/12/06 01:19:48     D:\oracle\product\10.2.0\crs\cdata\crs
db1     2010/12/05 01:19:48     D:\oracle\product\10.2.0\crs\cdata\crs
db1     2010/11/25 01:19:45     D:\oracle\product\10.2.0\crs\cdata\crs

Create a batch file to copy to some safe location (10gR2):
#Backup the voting disk to .81
ocopy \\.\votedsk1 \\\additional_backups\databases\homerac\VOTEnOCRBackup\votingdisk
#Backup OCR .81
xcopy D:\oracle\product\10.2.0\crs\cdata\*.* \\\additional_backups\databases\homerac\VOTEnOCRBackup\ocr /s/e
D:\oracle\product\10.2.0\crs\BIN>ocopy \\.\ocrcfg \\\additional_backups\databases\homerac\VOTEnOCRBackup


--Dumping info in OCR
D:\oracle\product\10.2.0\crs\BIN>ocrdump D:\Home_Backup\VOTEnOCR\ocrinfo
Check ocrinfo by text editor
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     128276
         Used space (kbytes)      :       3852
         Available space (kbytes) :     124424
         ID                       :  798127722
         Device/File Name         : \\.\ocrcfg
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded

Export OCR
D:\oracle\product\10.2.0\crs\BIN>ocrconfig -export ocr.backup

Changes in 11gR2
You can now do a manual backup of the OCR any time you like, without having to wait until that is done automatically – which is also still done:

D:\app\\grid\BIN>ocrconfig -manualbackup

or-11     2012/10/01 10:50:44     D:\app\\grid\cdata\homecrs\backup_20121001_105044.ocr
Manual backup location
D:\app\\grid\BIN>ocrconfig -backuploc D:\Home_Backup\ocr_voting
PROT-45: The specified Oracle Cluster Registry backup location D:\Home_Backup\ocr_voting is inaccessible.
PROC-49: The specified Oracle Cluster Registry backup location is inaccessible on nodes or-12.

You need to have the same folder on all the nodes for the backup location you specify in the ocrconfig command, and then take the backup:

D:\app\\grid\BIN>ocrconfig -backuploc D:\Home_Backup\ocr_voting

In Oracle Clusterware 11g Release 2, an additional component related to the OCR, called the Oracle Local Registry (OLR), is installed on each node in the cluster. The OLR is a local registry for node-specific resources and is used by cluster and non-cluster Oracle processes. One use of the OLR is to store the location of the voting disk; this information is used when the node is starting up and attempts to join the cluster. The OLR file is located in grid_home/cdata/.olr. You can view the status of the OLR file on each node by using the ocrcheck command with the -local parameter, as seen here:
D:\app\\grid\BIN>Ocrcheck -local -config
Oracle Local Registry configuration is :
         Device/File Name         : D:\app\\grid\cdata\or-11.olr
You can also use the ocrdump to see the contents

D:\app\\grid\BIN>Ocrdump -local -stdout
Note: Oracle Clusterware can access the OCR and the voting disks present in ASM even if the ASM instance is down. As a result CSS can continue to maintain the Oracle cluster even if the ASM instance has failed.

Monitoring Oracle 10g RAC with Quest Spotlight on RAC

For a RAC DBA following are the normal questions
  • How is my cluster performing? 
  • How is my Database performance as a whole? 
  • Contention issues disk, io, cluster, interconnect? 
  • Latency problems within HSI (High Speed Interconnects)? Are there poor settings? Are my NICs set properly? Or is it a poorly configured RAC? 
  • Inter-node communication issues? Are too many "Hot Blocks" being transferred across nodes due to poor database design? 
  • Poor load balancing?      

If you are curious about questions like these, then Spotlight on RAC by Quest Software may be a good choice for you; I use this product for the same kinds of questions.
This product provides the following classes of functionality to the RAC DBA.

1- Physical architecture, all of the essential components of a cluster: the individual instances, the interconnect and the I/O subsystem are displayed, including the essential data flows between them. 
The below example runs against my 2-node Windows 2003(64bit) Oracle 10g R2 RAC. 

2- Calculation of diagnostics and presenting them back to the interface as a whole. For instance, you might have latch contention on one node on a 6-node RAC and the aggregate of all the latches might just be fine for your RAC to function adequately. 

3- Several RAC-specific alarms (email notification), such as:
  1. Uneven Load Distribution to the instances.
  2. Overhead due to cluster maintenance.
  3. High latency problems within the HSI (High Speed Interconnects).
  4. Excessive HSI communication (might point to partitioning data across nodes; for instance, a typical hybrid, meaning OLTP (Online Transaction Processing)/DSS (Decision Support System), might require data partitioning on the DSS part of that RAC database).
  5. Block Corruption / Orphaned blocks across instances.
  6. OCFS problems / Shared Disk (Whether on SAN/NAS) contention. 

SRVCTL Utility (10gR2)

We can do the following (but not limited to) by srvctl.
  • Database Configuration tasks such as:
    1.  Adding, modifying and deleting Cluster Database configuration.
    2.  Adding/or Deleting an instance or a service to/from the configuration of a Cluster Database.
    3.  Moving instances and services in a cluster database configuration and modifying service configurations.
    4.  Setting and unsetting the environment for an instance or service in a clustered database configuration.
    5.  Setting and unsetting the environment for an entire cluster database in a clustered database configuration.
  • Cluster Database Administration Tasks such as:
    1.  Starting and stopping cluster databases.
    2.  Starting and stopping cluster database instances.
    3.  Starting, stopping, and transferring cluster database services.
    4.  Getting statuses of cluster databases, cluster database instances, or cluster database services; this last one is interesting as, having taken all the pains of setting up the RAC, we will indeed be curious about their status.
  • Node Level Tasks such as:
    1.  Adding and deleting node level applications
    2.  Setting and unsetting the environment for node-level applications.
    3.  Managing node applications.
    4.  Managing ASM instances.
    5.  Starting and stopping a group of programs that includes virtual IP addresses, Listeners, Oracle Notification Services, and Oracle Enterprise Manager agents (for maintenance purposes).

    Getting on to SRVCTL

    C:\Documents and Settings\inam>srvctl -h  -- provides (long) list of commands
    Usage: srvctl [-V]
    Usage: srvctl add database -d -o [-m ] [-p ] [-A /netmask] [-r {PRIMA
    Usage: srvctl add instance -d -i -n
    Usage: srvctl add service -d -s -r "" [-a 
     C:\Documents and Settings\inam>srvctl config database 

    Querying the nodes for all running services

     C:\Documents and Settings\inam>srvctl config nodeapps -n DB1 -a -g -s -l
    VIP exists.: /
    GSD exists.
    ONS daemon exists.
    Listener exists.

    Querying the nodes for ASM Configurations

    C:\Documents and Settings\inam> srvctl config asm -n DB1
    +ASM2 D:\oracle\product\10.2.0\db_1

    Getting the Status of Various Services

    C:\Documents and Settings\inam>srvctl status database -d HOME -f -v
    Instance HOME1 is running on node db1
    Instance HOME2 is running on node db2

    C:\Documents and Settings\inam>srvctl status asm -n DB1
    ASM instance +ASM2 is running on node DB2.

    C:\Documents and Settings\inam>srvctl status database -d HOME -f -v -S HOME
    #@=info: operation={status} config={full} ver={}
    #@=stage[0]: abbrev={inst} desc={Instance}
    #@=inst[0]: name={HOME1} node={db1} oh={D:\oracle\product\10.2.0\db_1}
    #@=inst[1]: name={HOME2} node={db2} oh={D:\oracle\product\10.2.0\db_1}

    Instance HOME1 is running on node db1
    Instance HOME2 is running on node db2
    #@=done: status={0}

    Note: -S is for service, -v for verbose, -f Include disabled applications also

    C:\Documents and Settings\inam>srvctl -V
    srvctl version:

    To Stop the service 
    srvctl stop service -d HOME -s HOME

    Starting Up and Shutting Down with SRVCTL 

    srvctl start instance -d mydb -i "myinstance_list" [-o start_options] [-c connect_str | -q]

    To stop, do the following:
    srvctl stop instance -d mydb -i " myinstance_list" [-o stop_options] [-c connect_str | -q]
    To start and stop the entire RAC cluster database, meaning all of the instances, you will do the following from your SRVCTL in the command line:
    srvctl start database -d mydb [-o start_options] [-c connect_str | -q]
    srvctl stop database -d mydb [-o stop_options] [-c connect_str | -q]
    Stop all application processes.[root@racnode1 ~]# srvctl stop database -d racdb
    [root@racnode1 ~]# srvctl stop asm -n racnode1
    [root@racnode1 ~]# srvctl stop asm -n racnode2
    [root@racnode1 ~]# srvctl stop nodeapps -n racnode1
    [root@racnode1 ~]# srvctl stop nodeapps -n racnode2


    D:\app\11.2.0\grid\BIN>srvctl status diskgroup -g dbflash -n or-11
    Disk Group dbflash is running on or-11 



Business Intelligence/DW - Introduction

Business intelligence (BI) refers to computer-based techniques used in spotting, digging-out, and analyzing business data, such as sales revenue by products and/or departments, or by associated costs and incomes.
BI technologies provide historical, current, and predictive views of business operations. Common functions of business intelligence technologies are reporting, online analytical processing, analytics, data mining, business performance management, benchmarking, text mining, and predictive analytics.
Business intelligence aims to support better business decision-making. Thus a BI system can be called a decision support system (DSS).
BI uses technologies, processes, and applications to analyze mostly internal, structured data and business processes.
Often BI applications use data gathered from a data warehouse or a data mart. However, not all data warehouses are used for business intelligence, nor do all business intelligence applications require a data warehouse.
According to  Forrester Research:
 "Business Intelligence is a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information used to enable more effective strategic, tactical, and operational insights and decision-making."

When using this definition, business intelligence also includes technologies such as data integration, data quality, data warehousing, master data management, text and content analytics, and many others that the market sometimes lumps into the Information Management segment.

Data warehouse

A data warehouse (DW) is a database used for reporting. The data is offloaded from the operational systems for reporting. A data warehouse maintains its functions in three layers: staging, integration and access.

A principle in data warehousing is that there is a place for each needed function in the DW. The functions are in the DW to meet the users' reporting needs. Staging is used to store raw data for use by developers (analysis and support). The integration layer is used to integrate data and to have a level of abstraction from users. The access layer is for getting data out for users.

This definition of the data warehouse focuses on data storage. The main source of the data is cleaned, transformed, catalogued, and made available for use by managers and other business professionals for data mining, online analytical processing, market research, and decision support.

Normalized versus dimensional approach for storage of data 

According to Ralph Kimball's approach, the data warehouse should be modeled using a dimensional model/star schema, while Bill Inmon's approach states that the data warehouse should be modeled using an E-R model/normalized model.

A key advantage of the dimensional approach is that the data warehouse is easier for the user to understand and to use; also, the retrieval of data from the data warehouse tends to be very fast. Dimensional structures are easy for business users to understand because the structure is divided into measurements/facts and context/dimensions. Facts are related to the organization's business processes and operational system, whereas the dimensions surrounding them contain context about the measurement (Kimball, Ralph 2008).
The main disadvantages of the dimensional approach are:
  1. In order to maintain the integrity of facts and dimensions, loading the data warehouse with data from different operational systems is complicated, and
  2. It is difficult to modify the data warehouse structure if the organization adopting the dimensional approach changes the way in which it does business.

In the normalized approach, the data in the data warehouse are stored following, to a degree, database normalization rules. Tables are grouped together by subject areas that reflect general data categories (e.g., data on customers, products, finance, etc.). The normalized structure divides data into entities, which creates several tables in a relational database; when applied in large enterprises, the result is dozens of tables linked together by a web of joins. Furthermore, each of the created entities is converted into separate physical tables when the database is implemented (Kimball, Ralph 2008). The main advantage of this approach is that it is straightforward to add information to the database. A disadvantage is that, because of the number of tables involved, it can be difficult for users both to:
  1. join data from different sources into meaningful information and then
  2. access the information without a precise understanding of the sources of data and of the data structure of the data warehouse.
Both normalized and dimensional models can be represented in entity-relationship diagrams, as both contain joined relational tables; the difference between the two is the degree of normalization.
These approaches are not mutually exclusive, and there are other approaches. Dimensional approaches can involve normalizing data to a degree.



REP-51002 when Accessing Reports Server, but it is up and Ready per Trace Files 
Trying to access the Reports Server gives the following error:
     REP-51002: Bind to Reports Server failed
This error is given when you try to access Reports Server using getserverinfo or showjobs.  For example:

Also, running the command " -findAll" gives error:
     REP-50503 No server found in the network
However, in the Reports Server trace files (for example D:\Oracle10GASSuite\reports\logs\rep_dev), it shows that the Reports Server is actually up and running.  There are no errors in the trace files. 
This is a firewall permissions problem. 
It is not a problem with the Reports Server, as the server traces show it is up and running.


To implement the solution, please execute the following steps:

1. Open the port on the firewall to allow the Reports Server to be accessible.
a) Run command: -findAll
b) From output, check what value is used for Channel port. For example:
Broadcast mechanism used to locate servers
Channel address = ###.###.###.###
Channel port = 14021
c) Open the channel port on the firewall, in this case, it is port 14021.
2. In the firewall, authorize requests from your host's IP address to ###.###.###.### (the channel address shown above).
Note:   If the above still does not work, then you may try adding the name of the machine where Reports Server is and the IP for that machine to the hosts file (/etc/hosts or C:\WINDOWS\system32\drivers\etc\hosts). For example:

Ref:  415407.1


One of the questions asked today was about the limitations of the AUTONOMOUS_TRANSACTION pragma (10gR2). 
PRAGMA signifies that the statement is a compiler directive. Pragmas are processed at compile time, not at run time; they pass information to the compiler.
You can apply this pragma to:

Top-level (not nested) anonymous PL/SQL blocks
 Local, standalone, and packaged functions and procedures
Methods of a SQL object type
Database triggers

You cannot apply this pragma to an entire package or an entire object type. Instead, you can apply the pragma to each packaged subprogram or object method.
You can code the pragma anywhere in the declarative section. For readability, code the pragma at the top of the section.
Once started, an autonomous transaction is fully independent. It shares no locks, resources, or commit-dependencies with the main transaction. You can log events, increment retry counters, and so on, even if the main transaction rolls back.

Unlike regular triggers, autonomous triggers can contain transaction control statements such as COMMIT and ROLLBACK, and can issue DDL statements (such as CREATE and DROP) through the EXECUTE IMMEDIATE statement.

In the main transaction, rolling back to a savepoint located before the call to the autonomous subprogram does not roll back the autonomous transaction. Remember, autonomous transactions are fully independent of the main transaction.

If an autonomous transaction attempts to access a resource held by the main transaction (which cannot resume until the autonomous routine exits), a deadlock can occur. Oracle raises an exception in the autonomous transaction, which is rolled back if the exception goes unhandled.

If you try to exit an active autonomous transaction without committing or rolling back, Oracle raises an exception. If the exception goes unhandled, or if the transaction ends because of some other unhandled exception, the transaction is rolled back.

CREATE TABLE emp_audit ( emp_audit_id NUMBER(6), up_date DATE, 
                         new_sal NUMBER(8,2), old_sal NUMBER(8,2) );

CREATE OR REPLACE TRIGGER audit_sal  -- trigger name is illustrative
   AFTER UPDATE OF salary ON employees FOR EACH ROW
DECLARE
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   -- bind variables are used here for values
   INSERT INTO emp_audit VALUES( :old.employee_id, SYSDATE, 
                                 :new.salary, :old.salary );
   COMMIT;  -- an autonomous transaction must commit or roll back
END;
/

TNS-12535 / ORA-12535 on Connection to Database



Oracle Net connections to the database fail with error TNS-12535 / ORA-12535: TNS: operation timed out. The Oracle Net client trace can show a large delay in resolving the TNS listener host.

Operating system error: connection timed out. The TNS listener is unable to service the requests quickly enough. The TNS listener has a queue (backlog) of connection requests that it needs to service; if it is unable to service these quickly enough, they will start to time out.


The size of the TNS listener queue can be influenced by setting the QUEUESIZE parameter in the listener.ora file. To specify a queue size for a listener, add a QUEUESIZE value at the end of any listening address:
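For example (the host name is a placeholder):

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521)(QUEUESIZE = 1024)))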

Default LISTENER.ORA queue sizes
Solaris default QUEUESIZE is 5
AIX default QUEUESIZE is 1024
Linux default QUEUESIZE is 1024
Windows default QUEUESIZE is 50
Ref: 214122.1
Brief notes:
- In essence, the ORA-12535/TNS-12535 error is a timing issue between the client and server.
- The TNS-12535 or ORA-12535 error is commonly a timeout error associated with firewalls or slow networks.
- It can also be due to an incorrect "timeout" parameter setting in the following files:
listener.ora -->
CONNECT_TIMEOUT_ (8.1.x and lower only)
sqlnet.ora -->
Ref: 125351.1
3-  TNSPING command fails with the TNS-12535 error:


Personal Firewall enabled on Client system.


To implement the solution, please execute the following steps:

Add the Oracle Client side applications to the Windows Firewall Exception List.

Click Start, then Control Panel, and double-click 'Windows Firewall'. Next, select the 'Exceptions' tab.
Use the 'Add Program...' dialog to add the programs to the Exception List.
If you are unable to navigate to the 'Windows Firewall' configuration window, you may be able to open it manually in Windows XP SP2 by:
clicking Start, clicking Run, typing Firewall.cpl, and then clicking OK. From there you can edit the programs on the Exceptions tab.

Ref: 341216.1

4- Connect as SYSDBA fails with TNS-12535; cannot start the database
Cause: Log files were being written from the ORASRV_BEQ_.COM file and the 32K version limit had been reached.
Solution: Delete the log files.

Sunday, December 05, 2010

Live Reporting with Data Pump -GG

Please review Live Reporting with OGG first.

We can add a data pump for storage flexibility and to offload the overhead of filtering and conversion processing from the source system. In this configuration, the primary Extract writes to a local trail, and the data pump reads that trail and sends the data over the network to a remote trail on the target, which is read by a Replicat group.
The data pump on the source system is optional, but it helps protect against data loss in the event of a network outage.

Source System
 Configure Primary Extract
GGSCI (HOME-03-DB12) 31> ADD EXTRACT extrptpr, tranlog, begin now
EXTRACT added.

This is the local trail on the source system that the Extract process writes to and that the data pump process then reads. We link this local trail to the primary Extract group.
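The ADD EXTTRAIL command that creates this local trail is not shown above; it would look like the following, where dirdat\lt is a hypothetical local trail prefix under the OGG home:

GGSCI (HOME-03-DB12) 32> ADD EXTTRAIL dirdat\lt, EXTRACT extrptpr

The same trail prefix would then be referenced with the EXTTRAIL parameter in the primary Extract's parameter file.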

-- Create the parameter file for primary Extract group.
GGSCI (HOME-03-DB12) 31> edit params extrptpr
-- Identify the Extract group:
EXTRACT extrptpr
-- Specify database login information as needed for the database (source):
USERID ggs_owner@homedev, PASSWORD ggs_owner;
-- Specify the local trail on the source system:
--Specify transaction log options
-- Specify tables to be captured:
TABLE scott.emp; -- you can use scott.* to include all the tables.

Configure data pump
Add EXTRACT for data pump. Use EXTTRAILSOURCE as the data source option, and specify the name of the local trail. 
EXTRACT added.
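The ADD EXTRACT command that produced this output is not shown; assuming the same hypothetical local trail prefix dirdat\lt, it would be:

GGSCI (HOME-03-DB12) 35> ADD EXTRACT extdpump, EXTTRAILSOURCE dirdat\lt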

Use the ADD RMTTRAIL command to specify a remote trail that will be created on the remote (or intermediary) system. Use the EXTRACT argument to link the remote trail to the extdpump data pump group. The linked data pump writes to this trail.

GGSCI (HOME-03-DB12) 38> ADD RMTTRAIL Y:\dirdat\rt, EXTRACT extdpump
GGSCI (HOME-03-DB12) 39> edit params extdpump
EXTRACT extdpump
USERID ggs_owner@homedev, PASSWORD ggs_owner
-- RMTTRAIL specifies the location of the remote trail and associates it with the data pump group,
-- as the trail will be written to over the network by the data pump process
RMTTRAIL Y:\dirdat\rt;
TABLE scott.emp;

To use PASSTHRU mode, the names of the source and target objects must be identical. No column mapping, filtering, SQLEXEC functions, transformation, or other functions that require data manipulation can be specified in the parameter file.
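When those conditions are met, pass-through processing is enabled simply by adding the PASSTHRU parameter to the data pump's parameter file; a minimal sketch for the pump group above:

EXTRACT extdpump
PASSTHRU
RMTTRAIL Y:\dirdat\rt
TABLE scott.emp;

In PASSTHRU mode the pump performs no data lookups, so the USERID login line can be omitted.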

Target System
On target system add the replicat group.
GGSCI (HOME-03-DB12) 30> ADD REPLICAT reprptdp, EXTTRAIL Y:\dirdat\rt, begin now
Edit parameter for replicat group
GGSCI (HOME-03-DB12) 31> edit params reprptdp
REPLICAT reprptdp
USERID ggs_owner@asmdb, PASSWORD ggs_owner
MAP scott.EMP, TARGET scott.EMP;

Source System
GGSCI (HOME-03-DB12) 40> start extrptpr
Sending START request to MANAGER ...
GGSCI (HOME-03-DB12) 41> start extdpump
Sending START request to MANAGER ...

Target System
GGSCI (HOME-03-DB12) 10> start replicat reprptdp

Now change some data on source (EMP) and observe the change on target

NOTE: I've used the same machine (Windows 2003 64-bit) for both the source and target systems. I emulated this by having two different folders for the GG source and target.

Live Reporting with OGG

Reporting configuration:

Source system
To configure the Manager process
Step 1: On the source, configure the Manager process according to the instructions in the tutorial.
Step 2: On the source, configure the Extract group; use the ADD EXTRACT command to create an Extract group.
GGSCI (HOME-03-DB12) 18> ADD EXTRACT extrpt, TRANLOG, BEGIN now -- see OGG Reference
EXTRACT added.

Step 3: On the source, use the ADD RMTTRAIL command to specify a remote trail to be created on the target system. Use the EXTRACT argument to link this trail to the Extract group.
GGSCI (HOME-03-DB12) 21> ADD RMTTRAIL Y:\dirdat\rt, EXTRACT extrpt
Step 4: On the source, use the EDIT PARAMS command to create a parameter file for the Extract
group. Include the following parameters plus any others that apply to your database environment.

GGSCI (HOME-03-DB12) 22> edit params extrpt
-- Identify the Extract group:
EXTRACT extrpt
-- Specify database login information as needed for the database (source):
USERID ggs_owner@homedev, PASSWORD ggs_owner;
-- Specify the name or IP address of the target system:
-- Specify the remote trail on the target system:
RMTTRAIL Y:\dirdat\rt;
--Specify transaction log options
-- Specify tables to be captured:
TABLE scott.emp; -- you can use scott.* to include all the tables.

Note: I used the above TRANLOGOPTIONS because my source database (HOMEDEV) was on a 32-bit Windows system while the OGG processes were running on another, 64-bit Windows system. I shared the logs location and then used the "Map Network Drive" option in Windows (64-bit) to expose the location to the OGG process.
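For completeness, the two elided lines in the Extract parameter file above (target host and transaction-log options) would normally look something like the following; the host name, MGRPORT value, and Z: path here are hypothetical, and the exact TRANLOGOPTIONS depend on where your logs are exposed:

RMTHOST HOME-03-DB12, MGRPORT 7809
TRANLOGOPTIONS ALTARCHIVELOGDEST Z:\archive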
Target system
Step 5: On the target, configure the Manager process according to the tutorial.
In the Manager parameter file, use the PURGEOLDEXTRACTS parameter to control the purging of files from the local trail.

Step 6: On the target, create a Replicat checkpoint table. All Replicat groups can use the same checkpoint table.
GGSCI (HOME-03-DB12) 10> dblogin userid ggs_owner@asmdb, password ggs_owner
Successfully logged into database.
GGSCI (HOME-03-DB12) 11> ADD CHECKPOINTTABLE ggs_owner.chkptab
Successfully created checkpoint table GGS_OWNER.CHKPTAB.

Step 7: On the target, use the ADD REPLICAT command to create a Replicat group. Use the EXTTRAIL argument to link the Replicat group to the remote trail.
GGSCI (HOME-03-DB12) 12> ADD REPLICAT reprpt, EXTTRAIL Y:\dirdat\rt, begin now

Step 8: On the target, use the EDIT PARAMS command to create a parameter file for the Replicat group.
Include the following parameters plus any others that apply to your database environment.
GGSCI (HOME-03-DB12) 14> edit params reprpt
-- Identify the Replicat group:
REPLICAT reprpt
-- State whether or not source and target definitions are identical:
ASSUMETARGETDEFS
-- Specify database login information as needed for the database:
USERID ggs_owner@asmdb, PASSWORD ggs_owner
-- Specify error handling rules:
REPERROR (, )  -- will be discussed in some other post
-- Specify tables for delivery:
MAP scott.EMP, TARGET scott.EMP;

Test your replication: make a modification to your source (EMP on the source DB) and verify it on the target.

Related Posts:
Oracle GoldenGate Tutorial Part 1 - Concepts and Architecture 
Oracle GoldenGate Tutorial Part 2 - Installation (Windows 2003) 
Oracle GoldenGate Tutorial Part 3 - Manager process 
Oracle GoldenGate Tutorial Part 4 - Working with OGG 
Live Reporting with Data Pump -GG 
OGG Reference - Commands and Options 
SQL Server to SQL Server Replication using GG

Saturday, December 04, 2010

Oracle GoldenGate Tutorial Part 4 - Working with OGG

As we now have the basic configuration and information about OGG, we can start with the first complete replication example. We will use an initial data load for this example; later on we will see other examples for online synchronization and DDL.

  •  Source DB (e.g., HOMEDEV)
  •  Target DB (e.g., ASMDB; I have an ASM Oracle database as the target, but you can use one without ASM)
  •  The user SCOTT exists on both source and target

Step 1:   On Source, start the ggsci session
Oracle GoldenGate Command Interpreter for Oracle
Version Build 078
Windows x64 (optimized), Oracle 10 on Jul 28 2010 14:52:12
Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.
GGSCI (DB2) 1> start mgr
Note: For me, the Manager process is running on port 7808 on the source system.
Step 2: Create the extract process "extload1" on the source system. This will be a one-time data extract task, so the source of the data is not the transaction log files of the database but the data itself. Thus we use the keyword SOURCEISTABLE with the ADD EXTRACT command.
GGSCI (DB2) 4> ADD EXTRACT extload1, SOURCEISTABLE
EXTRACT added.
Step 3: Create the parameter file for the extract group extload1
GGSCI (DB2) 5> edit params extload1 (notepad will be opened set the parameters)
EXTRACT extload1  -- the name of the Extract created earlier
USERID ggs_owner@homedev, PASSWORD ggs_owner  -- OGG owner doing the work for the extract
RMTHOST DB2, MGRPORT 7809  -- remote host for the Replicat process, with Manager running on port 7809
RMTTASK replicat, GROUP repload1  -- remote task type and the group responsible for performing that task
TABLE SCOTT.EMP;  -- table to be extracted; assume the EMP table on the target (ASMDB) is empty for this example

Step 4: On the target, create the initial data load task "repload1" for replication. As this will be a one-time task, we use the keyword SPECIALRUN with the ADD REPLICAT command.
GGSCI (DB2) 4> ADD REPLICAT repload1, SPECIALRUN

GGSCI (DB2) 6> start mgr
Manager started.

Step 5: Create the parameter file for the Replicat group, "repload1"
GGSCI (DB2) 5> edit params repload1 (notepad will be opened, set parameters)

REPLICAT repload1 -- process/group name for Replicat
USERID ggs_owner@asmdb, PASSWORD ggs_owner -- OGG db user in the target database
ASSUMETARGETDEFS  -- used when the source and target tables specified with a MAP statement have identical column structure
MAP scott.emp, TARGET scott.emp;  -- establishes a relationship between one or more source and target objects

Step 6: Start the initial load data extract task on the source system. Since this is a one-time task, the Extract process will initially show as running and will stop once the data load is complete.
We do not have to manually start the Replicat process on the target; it is started automatically when the Extract task starts on the source system.
GGSCI (DB2) 6> start mgr
Manager started.

GGSCI (DB2) 13> start extract extload1

Check SCOTT.EMP on the target and you will see the rows there.

NOTE: Now that you are familiar with GoldenGate and with creating parameter files for the processes, etc., I'll publish further posts without the title Oracle GoldenGate Tutorial.

Related Posts:
Oracle GoldenGate Tutorial Part 1 - Concepts and Architecture 
Oracle GoldenGate Tutorial Part 2 - Installation (Windows 2003) 
Oracle GoldenGate Tutorial Part 3 - Manager process 
Live Reporting with OGG 
Live Reporting with Data Pump -GG 
OGG Reference - Commands and Options 
SQL Server to SQL Server Replication using GG

Oracle GoldenGate Tutorial Part 1 - Concepts and Architecture

What is Oracle GoldenGate?
Oracle GoldenGate is a comprehensive software package for enabling the replication of data in heterogeneous data environments. The product set enables high-availability solutions, real-time data integration, transactional change data capture, data replication, transformations, and verification between operational and analytical enterprise systems. It enables the exchange and manipulation of data at the transaction level as well as changes to DDL (data definition language) across a variety of topologies.
With this flexibility, and the filtering, transformation, and custom processing features of Oracle GoldenGate, you can support numerous business requirements:
● Business continuance and high availability.
● Initial load and database migration.
● Data integration.
● Decision support and data warehousing.

Oracle GoldenGate architecture

Oracle GoldenGate is composed of the following components:
● Extract Process
● Data pump Process
● Replicat Process
● Trails or extract files
● Checkpoints
● Manager Process
● Collector Process

The Extract process runs on the source system and is the extraction (capture) mechanism of Oracle GoldenGate. You can configure Extract in one of the following ways:
● Initial loads: For initial data loads, Extract extracts a current set of data directly from the source objects.
● Change synchronization: To keep source data synchronized with another set of data, Extract captures changes made to data (typically transactional inserts, updates, and deletes) after the initial synchronization has taken place. DDL changes and sequences are also extracted, if supported for the type of database being used.
Data pumps
This is an optional GG process that runs on the source system. Without a data pump process, Extract must send data to a remote trail on the target. If you use a data pump, the primary Extract process writes to a trail on the source system; the data pump reads this trail and sends the data over the network to a remote trail on the target.
The Replicat process runs on the target system. Replicat reads extracted data changes and DDL changes (if supported) that are specified in the Replicat configuration, and then it replicates them to the target database.
To support the continuous extraction and replication of database changes, Oracle GoldenGate stores the captured changes temporarily on disk in a series of files called a trail.
Trail files are created as needed during processing, and they are aged automatically to allow processing to continue without interruption for file maintenance. All file names in a trail begin with the same two characters, which you assign when you create the trail.
Checkpoints store the current read and write positions of a process to disk for recovery purposes. These checkpoints ensure that data changes that are marked for synchronization actually are extracted by Extract and replicated by Replicat, and they prevent redundant processing.
Manager is the control process of Oracle GoldenGate. Manager must be running on each system in the Oracle GoldenGate configuration before Extract or Replicat can be started. It performs managerial tasks like monitoring , restarting the golden gate processes , maintaining trail files etc.
Collector is a process that runs in the background on the target system. Collector receives extracted database changes that are sent across the TCP/IP network, and it writes them to a trail or extract file. Typically, Manager starts Collector automatically when a network connection is required. When Manager starts Collector, the process is known as a dynamic Collector, and Oracle GoldenGate users generally do not interact with it.

Other Concepts
Processing Group
To differentiate among multiple Extract or Replicat processes on a system, you define processing groups.
A processing group consists of a process (either Extract or Replicat), its parameter file, its checkpoint file, and any other files associated with the process. For Replicat, a group also includes a checkpoint table, if one is being used.
Commit Sequence Number (CSN)
A CSN is an identifier that Oracle GoldenGate constructs to identify a transaction for the purpose of maintaining transactional consistency and data integrity. It can be required to position Extract in the transaction log, to reposition Replicat in the trail, or for other purposes.
Each kind of database management system generates some kind of unique serial number of its own at the completion of each transaction, which uniquely identifies that transaction.  Extract writes a normalized form of the CSN to external storage such as the trail files and the checkpoint file.

Related Posts:
Oracle GoldenGate Tutorial Part 2 - Installation (Windows 2003) 
Oracle GoldenGate Tutorial Part 3 - Manager process 
Oracle GoldenGate Tutorial Part 4 - Working with OGG 
Live Reporting with OGG 
Live Reporting with Data Pump -GG 
OGG Reference - Commands and Options 
SQL Server to SQL Server Replication using GG

Oracle GoldenGate Tutorial Part 2 - Installation (Windows 2003)

1- Download the GG software from the following location
I used the following for my platform (Windows 2003 Server)
Oracle GoldenGate v11. for Oracle 11g 64-bit on Windows XP, 2003, 2008 (13 MB)

2- Download the Oracle Client (if you don't already have it installed) and test it against an existing Oracle database. You don't need to install the client if you are running Oracle GoldenGate on the same machine where the Oracle database is running. In my case, my Oracle 10g databases (source + target) were running on 32-bit Windows and I had to install OGG on 64-bit Windows, so I installed the Oracle Client on that machine with the necessary modifications in tnsnames.ora for the source and target databases.
3- As OGG must run on both the source and target, do the following to install it on both systems.
i) Extract the Oracle GoldenGate zip file with WinZip to a drive, e.g., E:
ii) Run the command shell and change directories to the new Oracle GoldenGate directory.
     C:\Documents and Settings\inam>e:
iii) From the Oracle GoldenGate directory, run the GGSCI program.

Oracle GoldenGate Command Interpreter for Oracle
Version Build 078
Windows x64 (optimized), Oracle 10 on Jul 28 2010 14:52:12
Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.
GGSCI (DB2) 1>
iv) In GGSCI, issue the following command to create the Oracle GoldenGate working directories.
GGSCI (DB2) 1> create subdirs
Creating subdirectories under current directory E:\OGG\GoldenGate
Parameter files                D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirprm: created
Report files                   D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirrpt: created
Checkpoint files             D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirchk: created
Process status files          D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirpcs: created
SQL script files            D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirsql: created
Database definitions files    D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirdef: created
Extract data files             D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirdat: created
Temporary files                D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirtmp: created
Veridata files                D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirver: created
Veridata Lock files            D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirver\lock: created
Veridata Out-Of-Sync files     D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirver\oos: created
Veridata Out-Of-Sync XML files D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirver\oosxml: created
Veridata Parameter files      D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirver\params: created
Veridata Report files          D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirver\report: created
Veridata Status files          D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirver\status: created
Veridata Trace files           D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirver\trace: created
Stdout files                   D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirout: created

v) Issue the following command to exit GGSCI.
GGSCI (DB2) 2> Exit

Preparing Database for Golden Gate
We then need to create a database user which will be used by the GoldenGate Manager, Extract and Replicat processes. We create the user GGS_OWNER and grant it the required privileges.
SQL> create tablespace ggs_data
2 datafile 'D:\oracle\product\10.2.0\oradata\HOMEDEV\HOMEDEV\GGS_DATA.DBF' size 200m;
SQL> create user ggs_owner identified by ggs_owner
2 default tablespace ggs_data
3 temporary tablespace temp;
User created.
SQL> grant connect,resource to ggs_owner;
Grant succeeded.
SQL> grant select any dictionary, select any table to ggs_owner;
Grant succeeded.
SQL> grant create table to ggs_owner;
Grant succeeded.
SQL> grant flashback any table to ggs_owner;
Grant succeeded.
SQL> grant execute on dbms_flashback to ggs_owner;
Grant succeeded.
SQL> grant execute on utl_file to ggs_owner;
Grant succeeded.

 We can then confirm that the GoldenGate user we have just created is able to connect to the Oracle database

GGSCI (DB2) 4> DBLOGIN  USERID ggs_owner@homedev, PASSWORD ggs_owner
Successfully logged into database.

Enable Supplemental Logging
We need to enable supplemental logging at the database level; otherwise we will get an error when we try to start the Extract process.

You can check the current status with:

SELECT SUPPLEMENTAL_LOG_DATA_MIN, supplemental_log_data_pk, supplemental_log_data_ui FROM V$DATABASE;
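The command that actually enables minimal supplemental logging is not shown above; it is the standard one, run as SYSDBA and typically followed by a log switch:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER SYSTEM SWITCH LOGFILE;

After this, the query above should report YES in the SUPPLEMENTAL_LOG_DATA_MIN column.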

Related Posts:
Oracle GoldenGate Tutorial Part 1 - Concepts and Architecture 
Oracle GoldenGate Tutorial Part 3 - Manager process 
Oracle GoldenGate Tutorial Part 4 - Working with OGG 
Live Reporting with OGG 
Live Reporting with Data Pump -GG 
OGG Reference - Commands and Options 
SQL Server to SQL Server Replication using GG

Oracle GoldenGate Tutorial Part 3 - Manager process

1. From the Oracle GoldenGate directory, run the ggsci program to open the Oracle GoldenGate Software Command Interface, commonly known as GGSCI.
2. In GGSCI, issue the following command to edit the Manager parameter file.
GGSCI (DB2) 1> edit params mgr  (notepad will be opened)
3. Add the following parameters, including the Manager port number.
PORT 7808 --defines the port number on which Manager runs on the local system.
USERID ggs_owner@homedev,PASSWORD ggs_owner -- db user used by gg processes
PURGEOLDEXTRACTS D:\Temp\GG\GG_SOURCE_HOMEDEV_ONSRV1\dirdat\ex,USECHECKPOINTS  -- allows you to manage trail files in a centralized fashion and take into account multiple processes.

Start/stop Manager from GGSCI

GGSCI (DB2) 2> start mgr
Manager started.

GGSCI (DB2) 3> stop mgr
Manager process is required by other GGS processes.
Are you sure you want to stop it (y/n)? y

Sending STOP request to MANAGER ...
Request processed.
Manager stopped.

GGSCI (DB2) 4> info mgr

Manager is DOWN!

Related Posts:
Oracle GoldenGate Tutorial Part 1 - Concepts and Architecture 
Oracle GoldenGate Tutorial Part 2 - Installation (Windows 2003) 
Oracle GoldenGate Tutorial Part 4 - Working with OGG 
Live Reporting with OGG 
Live Reporting with Data Pump -GG 
OGG Reference - Commands and Options 
SQL Server to SQL Server Replication using GG