
Tuesday, March 30, 2010

Streams 10gR2 - How to


This write-up assumes you have a good understanding of database administration and at least a theoretical understanding of Streams. I suggest you read this write-up carefully in full before starting. You will need to add entries to your tnsnames.ora to create services for the participating databases, and you may need to adapt the scripts here to your environment. If you face a problem, you can write to me on LinkedIn.
Overview:
Streams can be used for such diverse tasks as data replication, data warehousing, database migration between platforms, application upgrade or migration, and queuing applications.

Streams can be configured in a number of different ways depending on the business requirements. For example, Streams can be configured to:

  • Replicate data from a database to one or more databases, even if those databases have different structures or naming conventions.
  • Replicate data between hardware platforms and/or database releases and/or character sets.
  • Consolidate data from multiple sources with varying structures into a single database.
  • Provide high availability while performing database or application upgrades or while migrating between hardware platforms.
We used Streams for one-way replication at the Ministry of Higher Education (KSA) for their Finance application's database. It was schema-level replication because the application uses only one schema. For this write-up I'll use the HR schema, since we ran our first tests on it; discussing the Ministry's application schema is irrelevant here and also more complex, as we are developing a visual interface to manage the Streams configuration easily and dynamically (tables can be included in or excluded from the capture process via this interface).
Replication using Oracle Streams is implemented in the following way:
1. A background capture process is configured to capture changes made to tables, schemas, or the entire database. The capture process captures changes from the redo log and formats each captured change into a logical change record (LCR). The capture process uses LogMiner to mine the redo/archive logs to format the LCRs.
2. The capture process enqueues the LCR events into a specified queue.
3. This queue is scheduled to propagate events from one queue to another in a different database.
4. A background apply process dequeues the events and applies them at the destination database.
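Once the environment is configured (later in this write-up), each of these components can be seen in the data dictionary. A minimal sketch of the checks, run as the Streams administrator; the process names in the comments are the ones created later in this write-up:
-- Capture (steps 1 and 2 above), e.g. STREAM_CAPTURE:
SQL> SELECT capture_name, status FROM DBA_CAPTURE;
-- Propagation (step 3), e.g. STREAM_PROPAGATE:
SQL> SELECT propagation_name, destination_dblink FROM DBA_PROPAGATION;
-- Apply (step 4), e.g. STREAM_APPLY:
SQL> SELECT apply_name, status FROM DBA_APPLY;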
Moving to Configuration
First of all, you need to know the following:
1- Source/Capture Site – Global Database name of the Source
(For this write up it is TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM)
2- Target/Apply Site - Global Database name of the Target
(For this write up it is APPLYTEST.REGRESS.RDBMS.DEV.US.ORACLE.COM)
3- STRMADMIN - Streams Administrator with password strmadmin (to be created)
4- HR - Source schema to be replicated (already installed if you chose the examples during installation; otherwise install it using the scripts provided in the RDBMS folder of your installation)
Note: We will replicate both DML and DDL.
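Before starting, make sure the tnsnames.ora on each site resolves both databases. A minimal sketch of the entries, assuming the default port 1521; the host names below are placeholders for your own:

TEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-host)(PORT = 1521))  # assumed host/port
    (CONNECT_DATA = (SERVICE_NAME = TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM))
  )

APPLYTEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = target-host)(PORT = 1521))  # assumed host/port
    (CONNECT_DATA = (SERVICE_NAME = APPLYTEST.REGRESS.RDBMS.DEV.US.ORACLE.COM))
  )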
Configuring Streams-related Parameters:
To ensure a successful Streams implementation, use the following recommendations when constructing a Streams environment in the 10.2 release of the Oracle database.
  • Software Version
  • Database Settings: Parameters, Storage, and Privileges
  • Source Site Configuration
  • Target Site Configuration

Software Version

Oracle recommends running Streams on the latest available patchset (10.2.0.4 at the time of writing), although I first tested this on 10.2.0.1.0 and it worked successfully.

Database Parameters

For best results in a Streams environment, there are about 11 parameters that can be set, but I set only the following at each participating instance (TEST and APPLYTEST for this write-up):

Parameter & Recommendation: _job_queue_interval = 1
Description: Scan rate interval (in seconds) of the job queue. The default is 5. This is a hidden parameter.
Consideration: This improves the scan rate for propagation jobs to every second, rather than every 5 seconds.

Parameter & Recommendation: log_archive_dest_1
Description: Where archived logs will be stored.
Consideration: You should have enough space at this destination.

Parameter & Recommendation: sga_max_size = 1024M
Description: While testing I used this size.

Parameter & Recommendation: sga_target = 800M
Description: While testing I used this size.

Doing the work now:

We now have enough information to begin. I'll proceed step by step, adding brief explanations where needed.

1- First we need to create the Streams admin user on the source/capture site (TEST); do the following:

--SOURCE SITE

SQL> CONN SYS/TEST@TEST AS SYSDBA
Connected.
SQL> CREATE USER STRMADMIN IDENTIFIED BY STRMADMIN DEFAULT
2 TABLESPACE STREAMS_TBS
3 TEMPORARY TABLESPACE TEMP
4 QUOTA UNLIMITED ON STREAMS_TBS;
User created.

SQL> GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
Grant succeeded.

SQL> execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
PL/SQL procedure successfully completed.
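Note that the CREATE USER statement above assumes the STREAMS_TBS tablespace already exists on the source. If it does not, create it first; a minimal sketch, with the datafile path and sizes as assumptions to adapt to your environment:

SQL> CREATE TABLESPACE STREAMS_TBS
2 DATAFILE 'D:\ORADATA\TEST\STREAMS_TBS01.DBF' SIZE 100M  -- assumed path/size
3 AUTOEXTEND ON NEXT 10M MAXSIZE 2G;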

2- Check the parameters and change their values if required

SQL> show parameter _job_queue_interval
SQL> ALTER SYSTEM SET "_job_queue_interval" = 1 SCOPE=SPFILE;
System altered.
SQL> show parameter COMPATIBLE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      10.2.0.1.0

SQL> show parameter GLOBAL_NAMES -- it should be true, set if it is not in your env.

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
global_names                         boolean     TRUE

SQL> show parameter JOB_QUEUE_PROCESSES

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes                  integer     10

Set the log_archive_dest_1 value to your desired location for storing the source's archived logs. I've set it to a location outside the Oracle home.

SQL> show parameter LOG_ARCHIVE_DEST

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest                     string
log_archive_dest_1                   string      location=D:\ARCHIVELOG_DEST\DEST_1
log_archive_dest_10                  string
log_archive_dest_2                   string
log_archive_dest_3                   string
log_archive_dest_4                   string
log_archive_dest_5                   string
log_archive_dest_6                   string
log_archive_dest_7                   string
log_archive_dest_8                   string
log_archive_dest_9                   string
log_archive_dest_state_1             string      enable
log_archive_dest_state_10            string      enable
log_archive_dest_state_2             string      enable
log_archive_dest_state_3             string      enable
log_archive_dest_state_4             string      enable
log_archive_dest_state_5             string      enable
log_archive_dest_state_6             string      enable
log_archive_dest_state_7             string      enable
log_archive_dest_state_8             string      enable
log_archive_dest_state_9             string      enable

SQL> show parameter LOG_ARCHIVE_DEST_STATE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_1             string      enable
log_archive_dest_state_10            string      enable
log_archive_dest_state_2             string      enable
log_archive_dest_state_3             string      enable
log_archive_dest_state_4             string      enable
log_archive_dest_state_5             string      enable
log_archive_dest_state_6             string      enable
log_archive_dest_state_7             string      enable
log_archive_dest_state_8             string      enable
log_archive_dest_state_9             string      enable

SQL> show parameter PARALLEL_MAX_SERVERS

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_max_servers                 integer     80

SQL> show parameter REMOTE_ARCHIVE_ENABLE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
remote_archive_enable                string      true

SQL> show parameter SGA_TARGET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer 584M

SQL> show parameter sga_max_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_max_size                         big integer 584M

SQL> alter system set sga_max_size=1024M scope=spfile;
System altered.
3- Now verify that supplemental logging is enabled at the source, either at the database level or for each individual replicated table. As mentioned earlier, the capture process uses LogMiner to mine the redo logs to format LCRs; when supplemental logging is enabled, the database stores the additional data LogMiner needs in the redo stream.
You can check database-level logging with the following SQL statement:
SQL> SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM V$DATABASE;
SUP SUP
--- ---
YES YES
You can check table-level logging with the following SQL statement:
SQL> SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_tables UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_schemas UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_database;
You can check the supplemental log groups with the following SQL statement:
SQL> Select log_group_name, column_name, position
2 from dba_log_group_columns where table_name = 'DEPARTMENTS' and owner='HR';
It is quite possible that supplemental logging is not enabled in your environment. In my case it was already enabled because I had been using LogMiner. If it is not enabled in your environment, enable it with the following statements:
SQL> Alter database add supplemental log data;
Database altered.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) COLUMNS;
Database altered.
SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI,SUPPLEMENTAL_LOG_DATA_FK,
2 SUPPLEMENTAL_LOG_DATA_all from v$database;
SUPPLEME SUP SUP SUP SUP
-------- --- --- --- ---
YES YES YES YES YES
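If you would rather not enable supplemental logging database-wide, it can instead be added per replicated table. A minimal sketch for a single HR table; repeat for each table in the replicated schema, and use (ALL) COLUMNS instead if a table has no primary or unique key:

SQL> ALTER TABLE HR.DEPARTMENTS ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;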
4- Shut down and restart your database (source) so that the SPFILE parameter changes take effect.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

--TARGET SITE

5- Now you will perform the following steps at the target site.

SQL> conn sys/APPLYTEST@APPLYTEST as sysdba
Connected.
SQL> create user STRMADMIN identified by STRMADMIN;
User created.
SQL> ALTER USER STRMADMIN DEFAULT TABLESPACE USERS
2 TEMPORARY TABLESPACE TEMP
3 QUOTA UNLIMITED ON USERS;
User altered.
SQL> GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
Grant succeeded.
SQL> execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
PL/SQL procedure successfully completed.
SQL> show parameter "_job_queue_interval"
SQL> ALTER SYSTEM SET "_job_queue_interval" = 1 SCOPE=SPFILE;
System altered.

SQL> show parameter COMPATIBLE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      10.2.0.1.0

SQL> show parameter JOB_QUEUE_PROCESSES

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes                  integer     10

SQL> show parameter sga_max_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_max_size                         big integer 584M

SQL> alter system set sga_max_size=1024M scope=spfile;
System altered.

SQL> show parameter REMOTE_ARCHIVE_ENABLE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
remote_archive_enable                string      true

SQL> show parameter GLOBAL_NAMES

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
global_names                         boolean     FALSE

SQL> alter system set global_names=true scope=both;
System altered.

SQL> SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM V$DATABASE;

SUP SUP
--- ---
NO  NO

SQL> Alter database add supplemental log data;
Database altered.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL,PRIMARY KEY,UNIQUE,FOREIGN KEY) COLUMNS;
Database altered.

SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI, SUPPLEMENTAL_LOG_DATA_FK,
2 SUPPLEMENTAL_LOG_DATA_all from v$database;

SUPPLEME SUP SUP SUP SUP
-------- --- --- --- ---
YES      YES YES YES YES
SQL> commit;
Commit complete.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> conn sys/APPLYTEST@APPLYTEST as sysdba
Connected.
SQL> startup
6- The following steps configure Streams. Be careful about which user you are connected as (SYS or STRMADMIN) at each step.
SQL> conn sys/test@test as sysdba
Connected.
STEP 1.- Create the streams queue and the database links that will be used for propagation.
SQL> connect STRMADMIN/STRMADMIN@test
Connected.
SQL> BEGIN
2 DBMS_STREAMS_ADM.SET_UP_QUEUE(
3 queue_table => 'STREAMS_QUEUE_TABLE',
4 queue_name => 'STREAMS_QUEUE',
5 queue_user => 'STRMADMIN');
6 END;
7 /
PL/SQL procedure successfully completed.
SQL> conn sys/test@test as sysdba
Connected.
SQL> create public database link APPLYTEST using 'APPLYTEST';
Database link created.
SQL> connect STRMADMIN/STRMADMIN@test
Connected.
SQL> select * from global_name ;
GLOBAL_NAME
--------------------------------------------------------------------------------
TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM
SQL> conn sys/APPLYTEST@APPLYTEST as sysdba
Connected.
SQL> select * from global_name ;
GLOBAL_NAME
--------------------------------------------------------------------------------
APPLYTEST.REGRESS.RDBMS.DEV.US.ORACLE.COM
NOTE: The global_names parameter must be TRUE on every participating database. If it is not, issue the following statements on each participating database (i.e., TEST and APPLYTEST for this write-up).
SQL> conn sys/test@test as sysdba
SQL> show parameter global_names
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
global_names boolean FALSE
SQL> alter system set global_names=TRUE;
System altered.
SQL> show parameter global_names
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
global_names boolean TRUE
SQL> create public database link "APPLYTEST.REGRESS.RDBMS.DEV.US.ORACLE.COM" using 'APPLYTEST';
Database link created.
NOTE: If global_names is TRUE, then the name of the database link must be exactly the same as the global name of the remote database. Furthermore, 'APPLYTEST' here is the service name in tnsnames.ora pointing to the target site, so adjust your tnsnames.ora accordingly.
SQL> connect STRMADMIN/STRMADMIN@test
Connected.
SQL> create database link "APPLYTEST.REGRESS.RDBMS.DEV.US.ORACLE.COM" connect to strmadmin identified by strmadmin;
Database link created.
STEP 2.- Connect as the Streams Administrator in the target site and create the streams queue
SQL> connect STRMADMIN/STRMADMIN@APPLYTEST
Connected.
SQL> BEGIN
2 DBMS_STREAMS_ADM.SET_UP_QUEUE(
3 queue_table => 'STREAMS_QUEUE_TABLE',
4 queue_name => 'STREAMS_QUEUE',
5 queue_user => 'STRMADMIN');
6 END;
7 /
PL/SQL procedure successfully completed.
STEP 3.- Add apply rules for the schema HR at the destination database
SQL> BEGIN
2 DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
3 schema_name => 'HR',
4 streams_type => 'APPLY',
5 streams_name => 'STREAM_APPLY',
6 queue_name => 'STRMADMIN.STREAMS_QUEUE',
7 include_dml => true,
8 include_ddl => true,
9 source_database => 'TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM');
10 END;
11 /
PL/SQL procedure successfully completed.
STEP 4.- Add capture rules for the schema HR at the source database
SQL> CONN STRMADMIN/STRMADMIN@TEST
Connected.
SQL> BEGIN
2 DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
3 schema_name => 'HR',
4 streams_type => 'CAPTURE',
5 streams_name => 'STREAM_CAPTURE',
6 queue_name => 'STRMADMIN.STREAMS_QUEUE',
7 include_dml => true,
8 include_ddl => true,
9 source_database => 'TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM');
10 END;
11 /
PL/SQL procedure successfully completed.
STEP 5.- Add propagation rules for the schema HR at the source database. This step will also create a propagation job to the destination database
SQL> BEGIN
2 DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
3 schema_name => 'HR',
4 streams_name => 'STREAM_PROPAGATE',
5 source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
6 destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@APPLYTEST.REGRESS.RDBMS.DEV.US.ORACLE.COM',
7 include_dml => true,
8 include_ddl => true,
9 source_database => 'TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM');
10 END;
11 /
PL/SQL procedure successfully completed.
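Before moving on, you can verify that the propagation and its underlying schedule were created; a quick check as STRMADMIN on the source (the column names are as I recall them from the 10.2 dictionary):

SQL> SELECT propagation_name, destination_dblink FROM DBA_PROPAGATION;
SQL> SELECT qname, destination, failures FROM DBA_QUEUE_SCHEDULES;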
STEP 6.- We may need to export, import, and instantiate tables from the source to the destination database; if the objects are not present in the destination database, perform an export of the objects from the source database and import them into the destination database.
Export from the Source Database:
Specify the OBJECT_CONSISTENT=Y clause on the export command.
By doing this, an export is performed that is consistent for each individual object at a particular system change number (SCN).
SQL> HOST exp USERID=SYSTEM/test@test OWNER=HR FILE=hr.dmp LOG=hr_exp.log OBJECT_CONSISTENT=Y STATISTICS = NONE
Import into the Destination Database:
Specify the STREAMS_INSTANTIATION=Y clause in the import command.
By doing this, the Streams metadata in the destination database is updated with the appropriate information corresponding to the SCN recorded in the export file.
SQL> HOST imp USERID=SYSTEM/applytest@applytest FULL=Y CONSTRAINTS=Y FILE=hr.dmp IGNORE=Y COMMIT=Y LOG=hr_imp.log STREAMS_INSTANTIATION=Y
If the objects are already present in the destination database, there are two ways of instantiating them at the destination site:
1. Metadata-only export/import: specify ROWS=N during export and IGNORE=Y during import, along with the import parameters above.
2. Manually instantiating the objects. I USED THIS OPTION.
Get the Instantiation SCN at the source database:
SQL> connect STRMADMIN/STRMADMIN@TEST
Connected.
SQL> set serveroutput on
SQL> DECLARE
2 iscn NUMBER; -- Variable to hold instantiation SCN value
3 BEGIN
4 iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
5 DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
6 END;
7 /
Instantiation SCN is: 218492643
PL/SQL procedure successfully completed.
Instantiate the objects at the destination database with this SCN value.
The instantiation SCN controls which LCRs the apply process applies: if the commit SCN of an LCR from the source database is less than or equal to the instantiation SCN, the apply process discards the LCR; otherwise, it applies the LCR. SET_SCHEMA_INSTANTIATION_SCN records this at the schema level (SET_TABLE_INSTANTIATION_SCN is the table-level equivalent).
SQL> connect STRMADMIN/STRMADMIN@APPLYTEST
Connected.
SQL> BEGIN
2 DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
3 SOURCE_SCHEMA_NAME => 'HR',
4 SOURCE_DATABASE_NAME => 'TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM',
5 RECURSIVE => TRUE,
6 INSTANTIATION_SCN => &iscn );
7 END;
8 /
Enter value for iscn: 218492643
old 6: INSTANTIATION_SCN => &iscn );
new 6: INSTANTIATION_SCN => 218492643 );
PL/SQL procedure successfully completed.
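You can confirm the SCN was recorded at the destination; a quick check (DBA_APPLY_INSTANTIATED_SCHEMAS holds the schema-level instantiation SCNs):

SQL> SELECT source_schema, instantiation_scn
2 FROM DBA_APPLY_INSTANTIATED_SCHEMAS
3 WHERE source_database = 'TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM';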
Note: In 10g the RECURSIVE => TRUE parameter of DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN is used to set the instantiation SCN for the schema and all of its tables. When RECURSIVE is TRUE, you need a database link on the destination database to the source database, named exactly like the source database's global name (create it beforehand if it does not already exist):
SQL> conn sys/APPLYTEST@APPLYTEST as sysdba
Connected.
SQL> create public database link "TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM" using 'TEST'; -- Here TEST is the service name in your tnsnames.ora, change your tnsnames accordingly.
Database link created.
SQL> connect STRMADMIN/STRMADMIN@APPLYTEST
Connected.
SQL> create database link "TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM" connect to strmadmin identified by strmadmin;
Database link created.
STEP 7.- Specify an 'apply user' at the destination database. This is the user that applies all DML and DDL statements. The user specified in the APPLY_USER parameter must have the necessary privileges to perform DML and DDL changes on the apply objects.
SQL> conn strmadmin/strmadmin@APPLYTEST
Connected.
SQL> BEGIN
2 DBMS_APPLY_ADM.ALTER_APPLY(
3 apply_name => 'STREAM_APPLY',
4 apply_user => 'HR');
5 END;
6 /
PL/SQL procedure successfully completed.
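To confirm the change took effect, a quick check against DBA_APPLY:

SQL> SELECT apply_name, apply_user FROM DBA_APPLY;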
STEP 8.- Set the apply parameter disable_on_error to 'n' so the apply process does not abort on every error; then start the apply process on the destination.
SQL> conn strmadmin/strmadmin@APPLYTEST
Connected.
SQL> BEGIN
2 DBMS_APPLY_ADM.SET_PARAMETER(
3 apply_name => 'STREAM_APPLY',
4 parameter => 'disable_on_error',
5 value => 'n');
6 END;
7 /
PL/SQL procedure successfully completed.
SQL> DECLARE
2 v_started number;
3 BEGIN
4 SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
5 FROM DBA_APPLY WHERE APPLY_NAME = 'STREAM_APPLY';
6 if (v_started = 0) then
7 DBMS_APPLY_ADM.START_APPLY(apply_name => 'STREAM_APPLY');
8 end if;
9 END;
10 /
PL/SQL procedure successfully completed.
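With disable_on_error set to 'n', failed transactions accumulate in the error queue instead of stopping the apply process, so check it periodically; a minimal sketch (a failed transaction can then be retried with DBMS_APPLY_ADM.EXECUTE_ERROR):

SQL> SELECT apply_name, source_database, local_transaction_id, error_message
2 FROM DBA_APPLY_ERROR;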
STEP 9.- Set up capture to retain 7 days' worth of LogMiner checkpoint information, then start the capture process on the source.
SQL> conn strmadmin/strmadmin@TEST
Connected.
SQL> BEGIN
2 DBMS_CAPTURE_ADM.ALTER_CAPTURE(
3 capture_name => 'STREAM_CAPTURE',
4 checkpoint_retention_time => 7);
5 END;
6 /
PL/SQL procedure successfully completed.
SQL> begin
2 DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAM_CAPTURE');
3 end;
4 /
PL/SQL procedure successfully completed.
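Once started, you can watch the capture process state and message counts; a quick check as STRMADMIN on the source:

SQL> SELECT capture_name, state, total_messages_captured
2 FROM V$STREAMS_CAPTURE;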
7- Perform changes in tables belonging to HR on the source site and check that they are applied at the destination.
SQL> connect HR/HR@TEST
insert into HR.DEPARTMENTS values (99,'OTHER',205,1700);
commit;
CREATE TABLE T(C1 NUMBER);
Confirm that the insert has been applied to HR.DEPARTMENTS at the destination and that table T has been created:
SQL> connect HR/HR@APPLYTEST
select * from HR.DEPARTMENTS where department_id=99;
desc HR.T;

Related Posts:

One-Way SCHEMA Level Streams Replication
Remove the local Streams configuration 
