Note: All the posts are based on practical approach avoiding lengthy theory. All have been tested on some development servers. Please don’t test any post on production servers until you are sure.

Thursday, December 05, 2013

Exadata: Migration Considerations and Strategies

Once the Exadata machine is ready, you can migrate your data from your legacy hardware to Exadata. Note that there is no difference between data segments stored in Exadata Storage Server cells and those in conventional storage unless Exadata Hybrid Columnar Compression (EHCC) is being used, so you can use the same tools and techniques that you normally use for migrating non-Exadata systems, with some considerations.
You can use many methods, tools, and techniques to migrate your database from legacy hardware to Exadata, but generally speaking they fall into two broad categories:

Logical Migration: Extracting the data from one database and loading it into another.
Physical Migration: Lifting the database, block by block, from one database server and moving it to another.

Logical Migration - Extracting and Loading:
The extract-and-load strategy is the simplest, but it requires the most downtime of all the migration strategies because all DML activity must be brought to a stop for the duration of the migration. This strategy can be advantageous when you want more control over your migration; for example, employing HCC to compress tables as they are loaded in is a relatively simple task.

Logical Migration - Tools/Considerations
DataPump

  • You can use Data Pump for moving large quantities of data between databases. You can dump the contents of an entire database or, more selectively, by schema or by table.
  • Using the COMPRESSION=ALL option, Data Pump reduces the size of the export significantly.
  • By default Data Pump does not guarantee the read consistency of your export. To export a read-consistent image of your database you must use either the FLASHBACK_SCN or the FLASHBACK_TIME parameter. If you use FLASHBACK_TIME, Data Pump looks up the nearest System Change Number (SCN) corresponding to the time you specified and exports all data as of that SCN. FLASHBACK_TIME="to_timestamp('05-DEC-2013 10:00:00','DD-MON-YYYY HH24:MI:SS')"
  • If you choose to use FLASHBACK_SCN, you can get the current SCN with the query SELECT current_scn FROM v$database;
  • The PARALLEL parameter instructs Data Pump to split the work up into multiple parts and run them concurrently. PARALLEL can vastly improve the performance of the export process.
  • Schema-level import is usually preferable when migrating databases. It allows you to break the process up into smaller, more manageable parts.
  • If you choose not to do a full database import, be aware that system objects including roles, public synonyms, profiles, public database links, system privileges, and others will not be imported. You will need to extract the DDL for these objects using the SQLFILE parameter and a FULL=Y import.
  • Data Pump only creates tablespaces automatically when a full database import is done.
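As an illustration, a schema-level, read-consistent, compressed, parallel export could look like the sketch below. A parameter file avoids shell quoting problems with FLASHBACK_TIME; the directory object, dump file names, and schema name are illustrative assumptions, not values from this post:

    # export.par - consistent, compressed, parallel schema export
    DIRECTORY=exp_dir
    DUMPFILE=sales_%U.dmp
    LOGFILE=sales_exp.log
    SCHEMAS=sales
    COMPRESSION=ALL
    PARALLEL=8
    FLASHBACK_TIME="to_timestamp('05-DEC-2013 10:00:00','DD-MON-YYYY HH24:MI:SS')"

    $ expdp system PARFILE=export.par

On the Exadata side, the matching impdp command would reference the same DIRECTORY and DUMPFILE parameters.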


Export and Import
If the database you are migrating to Exadata is on a release prior to Oracle 10g, Data Pump won’t be an option. Instead you will need to work with its predecessors, Export (exp) and Import (imp).

  • You can parallelize database exports by running concurrent schema exports (see the sketch after this list).
  • You cannot exploit Exadata’s HCC compression using Export/Import.
  • The Export/Import approach also does not support Transparent Data Encryption (TDE). If your database uses TDE you will need to use Data Pump to migrate this data.
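A minimal sketch of parallelizing classic exports by running two schema-level exports concurrently from the shell; the schema names, file names, and password are illustrative placeholders:

    # run two schema-level exports in the background at the same time
    $ exp system/oracle OWNER=sales FILE=sales.dmp LOG=sales.log &
    $ exp system/oracle OWNER=hr    FILE=hr.dmp    LOG=hr.log &
    $ wait   # block until both exports finish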
When to Use Data Pump or Export/Import
  • Data Pump and Export/Import are volume-sensitive operations, so they are better suited to smaller OLTP databases: the time it takes to move your database is tied directly to its size and to the bandwidth of your network, and for OLTP applications this is downtime.
  • Well suited for migrating large DW databases, where read-only data is separated from read-write data.
Copy Data Using DB Link
  • A database link can be used when extracting and copying very large amounts of data (even terabytes).
  • Data Pump also has a NETWORK_LINK option and can transfer data directly over database links, but it uses conventional INSERT AS SELECT (IAS), not direct-path inserts, so it is much slower, generates lots of redo and undo, and can run out of undo space. Remember also that conventional-path IAS does not compress data with HCC.
  • Use your own direct-path IAS statements with the APPEND hint to get the benefits of direct-path loads and maximum performance out of file-less transfer over database links (see the sketch after this list).
  • It’s a good idea to transfer only the large tables over database links and to use Export/Import or Data Pump for migrating everything else.
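A minimal sketch of a direct-path, parallel insert over a database link; the link name (legacy_link) and the table name are illustrative assumptions:

    SQL> ALTER SESSION ENABLE PARALLEL DML;
    SQL> -- APPEND forces a direct-path load; conventional IAS would be much slower
    SQL> INSERT /*+ APPEND PARALLEL(8) */ INTO sales
      2  SELECT * FROM sales@legacy_link;
    SQL> COMMIT;

If the target table was created with HCC (for example COMPRESS FOR QUERY HIGH), the direct-path insert compresses the rows as they are loaded.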
CTAS

  • Loading data using CTAS allows you to sort data as it is loaded, which improves the efficiency of Exadata’s storage indexes.
  • Ordered loading can increase the compression ratio of Exadata Hybrid Columnar Compression as well.
  • Perform the data load in parallel. Remember that parallel query and DDL are enabled by default, but parallel DML is not. You can enable and verify it as below:
    SQL> ALTER SESSION ENABLE PARALLEL DML;
    SQL> SELECT pq_status, pdml_status, pddl_status, pdml_enabled
      2  FROM v$session WHERE sid = SYS_CONTEXT('userenv','sid');

    PQ_STATUS PDML_STATUS PDDL_STATUS PDML_ENABLED
    --------- ----------- ----------- ------------
    ENABLED   ENABLED     ENABLED     YES
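As a sketch, a parallel CTAS over a database link that loads the rows in sorted order and applies HCC; the link, table, and column names are illustrative assumptions:

    SQL> CREATE TABLE sales_hist
      2  COMPRESS FOR QUERY HIGH        -- EHCC compression applied during the load
      3  PARALLEL 8
      4  AS SELECT /*+ PARALLEL(8) */ *
      5  FROM sales_hist@legacy_link
      6  ORDER BY sale_date;            -- ordered load helps storage indexes and compression ratio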

Improving Throughput

  • To get the expected throughput, especially when the source database server isn’t on the same LAN as the target, the underlying TCP stack must be tuned for high-throughput data transfer. Your source server must be able to send data as fast as your Exadata cluster can receive it, and any networking equipment (switches, routers) in between must also be able to handle that load, in addition to all other traffic flowing through it.

  • You should also tune configuration settings (array size, SDU, TCP buffer sizes) that affect Oracle’s data transfer speed for regular application connections (see the sketch after this list).
  • With database links, Oracle automatically sets the fetch array size to the maximum and transfers 32,767 rows at a time, so there is no need to tune it on your side.
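For example, a larger session data unit (SDU) can be requested in the connect descriptor and the fetch array size can be raised for SQL*Plus sessions; the alias, host, and values below are illustrative assumptions:

    # tnsnames.ora entry on the client/source side requesting a larger SDU
    LEGACYDB =
      (DESCRIPTION =
        (SDU = 65535)
        (ADDRESS = (PROTOCOL = TCP)(HOST = legacy-host)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = legacydb))
      )

    SQL> SET ARRAYSIZE 5000   -- raise the SQL*Plus fetch array size (default is 15)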
Replication Based Migration
Replication-based migration is done by creating a copy of the source database and then keeping it in sync by applying changes to the copy, or target database.


Physical replication (Redo Apply): shipping archived redo logs to the target, where they are applied to the database using its internal recovery mechanisms.
Logical replication (SQL Apply): extracting changes to the source database as SQL statements and executing them on the target database.

Replication Based Migration - Tools/Considerations
Oracle Streams and Golden Gate
Both Oracle Streams and Golden Gate replicate data in much the same way. A copy of the source database (the target) is created and started up using Recovery Manager, or Data Pump is used to instantiate selected schemas. Changes in the source database are then extracted, or mined, from the redo logs, converted to equivalent DML and DDL statements, and executed in the target database through the Capture, Stage, and Apply steps.


  • Target database remains online and available for applications to use while replication is running. Any restrictions on the target are limited to the tables being replicated.
  • These replication tools have the ability to do data transformation.
  • Tables in the target database may be compressed using Exadata HCC.
  • Consider the data type limitations with Streams, e.g. BFILE, ROWID, etc.
  • You may not get the best loading compression ratio compared to CTAS or IAS when using Golden Gate.
  • Endian differences between source and target are supported.
  • Requires very little downtime to switch over.
  • Replicated tables must be keyed. If there is no primary key on the source table, you must provide a list of columns that provide uniqueness. If no such combination exists, the table cannot be replicated.
  • Data mining can heavily impact performance on the source database.
  • If replication breaks, fixing it can be messy. It is often easier to scrap the target and rebuild it.
  • Streams tends to require much more CPU resources than Golden Gate.


Logical Standby
Logical standby is another type of replication useful for database migration. Like Streams and Golden Gate, changes to the source database are mined from the redo (or archived redo), converted to SQL statements, and executed on the target database. Logical standby is far more restrictive than Streams or Golden Gate: because it is instantiated from a physical, block-for-block copy of the source database, several additional rules come into play.
  • Logical standby allows read access to the standby database during replication.
  • Logical standby applies changes using conventional inserts so HCC compression will not be available.
  • Consider unsupported data types (same as Streams and Golden Gate), e.g. BFILE, ROWID.
  • Be sure to turn off NOLOGGING operations with ALTER DATABASE FORCE LOGGING on the source database, because NOLOGGING operations do not generate redo and therefore will not be replicated to the standby database (see the sketch after this list).
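A minimal sketch of checking and enabling force logging on the source database:

    SQL> SELECT force_logging FROM v$database;
    SQL> ALTER DATABASE FORCE LOGGING;   -- ensures NOLOGGING operations still generate redo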

Physical Migration

Physical migration is the process of creating a block-by-block copy of the source database (or parts of it) and moving it to Exadata. It does not allow any changes to be made to the target database, other than choosing not to migrate some unnecessary tablespaces.


Physical Migration - Considerations


  • Fastest way to migrate your database to Exadata
  • You will not be able to modify extent sizes for tables and indexes, alter your indexing strategy, implement partitioning, or apply HCC table compression. All these tasks must be done post-migration.
  • For all physical migration strategies except TTS, the new Exadata database starts out as a single-instance database. Post-migration steps are needed to register the database and all its instances with Cluster Ready Services (see the sketch after this list).
  • The source Database version must be 11.2 and source platform must be certified for running Oracle 11.2 (OEL5, RHEL5,SLES10 or other certified Linux versions).
  • The source platform must be little-endian (ASM rebalance and cross-platform transportable database only). When using a hybrid solution creating a new database on Exadata and using cross-platform transportable tablespaces, the source database can be on different platforms with different endianness.
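As an illustration, registering the migrated database and its instances with Cluster Ready Services might look like the sketch below; the database name, Oracle home, instance names, and node names are assumptions:

    $ srvctl add database -d PROD -o /u01/app/oracle/product/11.2.0/dbhome_1
    $ srvctl add instance -d PROD -i PROD1 -n exadb01
    $ srvctl add instance -d PROD -i PROD2 -n exadb02
    $ srvctl start database -d PROD

Before starting the additional instances you would typically also set CLUSTER_DATABASE=TRUE and create the extra redo threads and undo tablespaces they need.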

Physical Migration - Strategies
Backup and Restore
The backup and restore strategy uses Oracle’s Recovery Manager (RMAN) to create a full backup of the source database and then restore it to the Exadata platform. 

  • If the source database is on a big-endian platform, the datafiles can be converted during the backup or restore process using RMAN’s CONVERT DATABASE command.
  • If the database you need to migrate is large and the time to migrate is limited, use the incremental backup and restore strategy to reduce downtime for the final switch to Exadata (see the sketch after this list).
  • Enable block change tracking on the source database before the incremental level 0 is taken.
  • The incremental method does not support endian format conversion.
  • Best suited for OLTP databases that don’t require partitioning and HCC compression.
  • Also suitable for DW databases that already run on a little-endian, 64-bit platform.
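A minimal sketch of the incremental approach on the source database; the block change tracking file, backup location, and tags are illustrative assumptions:

    SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
      2  USING FILE '/u01/app/oracle/oradata/bct.chg';

    RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT '/backup/%U' TAG 'exa_l0';
    RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/backup/%U' TAG 'exa_l1';

The level 0 backup is restored on Exadata, then level 1 backups are taken and applied repeatedly to roll the copy forward; the final incremental plus archived logs are applied during the cutover window.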

Transportable Tablespaces
Transportable tablespaces (TTS) can be used to migrate subsets of the source database to Exadata. To do this you will need a running database on Exadata to host these subsets of your database. A sketch of the basic TTS steps appears after the list below.
  • Tablespaces must be put in read-only mode during the process.
  • Objects with underlying dependencies, like materialized views and table partitions, are not transportable unless they are contained in the same set.
  • Before transporting encrypted tablespaces you must first copy the Oracle wallet to the target system and enable it for the target database. A database can only have one Oracle wallet. If the database you are migrating to already has a wallet, you will need to use Data Pump to export and import table data.
  • Database version differences are supported as long as the target database is of the same or higher version than the source database.
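A minimal sketch of the TTS flow for a single tablespace; the tablespace, directory objects, and file names are illustrative assumptions:

    SQL> EXEC DBMS_TTS.TRANSPORT_SET_CHECK('SALES_TS', TRUE);   -- verify the set is self-contained
    SQL> ALTER TABLESPACE sales_ts READ ONLY;

    $ expdp system DIRECTORY=exp_dir DUMPFILE=sales_ts.dmp TRANSPORT_TABLESPACES=sales_ts

    # copy the dump file and the tablespace datafiles to Exadata, then on the target:
    $ impdp system DIRECTORY=imp_dir DUMPFILE=sales_ts.dmp \
        TRANSPORT_DATAFILES='+DATA/prod/datafile/sales_ts_01.dbf'

    SQL> ALTER TABLESPACE sales_ts READ WRITE;   -- on the target, after the import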
Physical Standby

In the physical standby strategy, the target database is instantiated from a full backup of the source database. The backup is restored to Exadata in the same way you would for the “Backup and Restore” strategy. Once the database is restored to Exadata, it is started in mount mode and kept in a continual state of recovery.
  • Because archived redo logs are constantly being applied to the standby database, conversion from big-endian to little-endian format is not supported.
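A minimal sketch of keeping the restored copy on Exadata in continual recovery, assuming redo transport from the source has already been configured:

    SQL> STARTUP MOUNT;
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

At cutover time, managed recovery is stopped and the standby is converted to the new primary with a Data Guard switchover, after which applications are redirected to Exadata.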
ASM Rebalance
The ASM Rebalance strategy utilizes ASM’s built-in rebalancing feature to shift the database, block by block, off of the current storage and onto Exadata’s high-performance storage cells with zero downtime for the database. It is not the fastest way to migrate to Exadata, and there are a lot of very strict configuration requirements, but once the system is configured the migration is fairly automated.

This method does not create a copy of your database; it migrates your current database at the storage level, as the sketch below illustrates.
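Conceptually, once the Exadata grid disks are visible to the source ASM instance, the migration is driven by ASM disk group operations like the sketch below; the disk group name, disk paths, and power limit are illustrative assumptions:

    SQL> ALTER DISKGROUP data
      2  ADD DISK 'o/*/DATA_CD_*_exacel01', 'o/*/DATA_CD_*_exacel02'
      3  DROP DISK legacy_disk01, legacy_disk02
      4  REBALANCE POWER 11;

    SQL> SELECT operation, state, est_minutes FROM v$asm_operation;   -- monitor the rebalance

As blocks are rebalanced onto the Exadata grid disks, the legacy disks are emptied and can be removed from the configuration once the operation completes.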
