You can migrate your database across platforms. All of the methods below are valid; choose the one that best fits your environment and downtime requirements.

Check the endian format of the current platform:

select t.endian_format
from v$transportable_platform t, v$database d
where t.platform_name = d.platform_name;

1. Export / Import, including the Data Pump utilities. All versions support Export/Import, but Data Pump requires 10.1.0.2 or higher.
2. Transportable Tablespaces (10g or later)
3. RMAN CONVERT DATABASE (10g or later)
4. Streams Replication
5. Create Table As Select (CTAS)
6. Oracle GoldenGate

To estimate the export size before choosing a method:

expdp system/password SCHEMAS=abc ESTIMATE_ONLY=y
expdp system/password FULL=y ESTIMATE_ONLY=y

Using Oracle GoldenGate

PART 1: Installing GoldenGate and configuring the environment

Source database platform - Oracle 11g, non-RAC, ASM: IBM AIX 5.3.00 5300-07 (64-bit) - GoldenGate zip file from edelivery.oracle.com: V26229-01.zip
Target database platform - Oracle 11g, RAC, ASM: RHEL 6 (64-bit) - GoldenGate zip file from edelivery.oracle.com: V26185-01.zip

Step 1: Download the zip file and extract it under the product/goldengate directory, for example /u01/app/oracle/product/goldengate.

unzip V26229-01.zip
tar -xvof ggs_AIX_ppc_ora11g_64bit.tar

Step 2: Set the ORACLE_SID of the source database.

$ export ORACLE_SID=SOURCEDB1

Step 3: Create the tablespace goldengate (size 300 MB or greater).

$ sqlplus / as sysdba
SYS@SOURCEDB1 SQL>create tablespace goldengate datafile '+DATA' size 300m;

Step 4: Create the database user goldengate and grant the appropriate privileges.

SYS@SOURCEDB1 SQL>create user goldengate identified by goldengate default tablespace goldengate temporary tablespace temp;
User created.
SYS@SOURCEDB1 SQL>grant connect, resource to goldengate;
Grant succeeded.
SYS@SOURCEDB1 SQL>grant select any dictionary, select any table to goldengate;
Grant succeeded.
SYS@SOURCEDB1 SQL>grant create table to goldengate;
Grant succeeded.
SYS@SOURCEDB1 SQL>grant flashback any table to goldengate;
Grant succeeded.
SYS@SOURCEDB1 SQL>grant execute on dbms_flashback to goldengate;
Grant succeeded.
SYS@SOURCEDB1 SQL>grant execute on utl_file to goldengate;
Grant succeeded.
SYS@SOURCEDB1 SQL>grant dba to goldengate;   --- not required
Grant succeeded.

Step 5: Make sure the source database is in archive log mode.

SYS@SOURCEDB1 SQL>select log_mode from v$database;

Step 6: Enable supplemental logging. You can enable it for the whole database or for a specific table.

SYS@SOURCEDB1 SQL>select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk, supplemental_log_data_all, supplemental_log_data_pl from v$database;

SUPPLEME SUP SUP SUP SUP SUP
-------- --- --- --- --- ---
NO       NO  NO  NO  NO  NO

SYS@SOURCEDB1 SQL>ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

For a specific table:

SYS@SOURCEDB1 SQL>ALTER TABLE admn.iostats ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Step 7: Run ./ggsci from the /u01/app/oracle/product/goldengate directory.
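Optionally, once GGSCI is running, a quick sanity check is to log in to the database with the user created in Step 4. This is only a sketch using the goldengate credentials from above; DBLOGIN is the standard GGSCI command for this:

GGSCI (SourceMachine)> DBLOGIN USERID goldengate, PASSWORD goldengate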
Step 8: Create the subdirectories using the CREATE SUBDIRS command.

GGSCI (SourceMachine) 1> CREATE SUBDIRS

Creating subdirectories under current directory /u01/app/oracle/product/goldengate

Parameter files                /u01/app/oracle/product/goldengate/dirprm: created
Report files                   /u01/app/oracle/product/goldengate/dirrpt: created
Checkpoint files               /u01/app/oracle/product/goldengate/dirchk: created
Process status files           /u01/app/oracle/product/goldengate/dirpcs: created
SQL script files               /u01/app/oracle/product/goldengate/dirsql: created
Database definitions files     /u01/app/oracle/product/goldengate/dirdef: created
Extract data files             /u01/app/oracle/product/goldengate/dirdat: created
Temporary files                /u01/app/oracle/product/goldengate/dirtmp: created
Veridata files                 /u01/app/oracle/product/goldengate/dirver: created
Veridata Lock files            /u01/app/oracle/product/goldengate/dirver/lock: created
Veridata Out-Of-Sync files     /u01/app/oracle/product/goldengate/dirver/oos: created
Veridata Out-Of-Sync XML files /u01/app/oracle/product/goldengate/dirver/oosxml: created
Veridata Parameter files       /u01/app/oracle/product/goldengate/dirver/params: created
Veridata Report files          /u01/app/oracle/product/goldengate/dirver/report: created
Veridata Status files          /u01/app/oracle/product/goldengate/dirver/status: created
Veridata Trace files           /u01/app/oracle/product/goldengate/dirver/trace: created
Stdout files                   /u01/app/oracle/product/goldengate/dirout: created

Step 9: Repeat Steps 1-5, 7 and 8 for the TARGET database.

PART 2: Configuring the Manager

Configuring the MANAGER process.

Step 1:
SourceMachine:/u01/app/oracle/product/goldengate> ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.1.1.1 OGGCORE_11.1.1_PLATFORMS_110421.2040
AIX 5L, ppc, 64bit (optimized), Oracle 11g on Apr 22 2011 03:09:27
Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.

Step 2:
GGSCI (SourceMachine) 1> start manager
ERROR: Parameter file /u01/app/oracle/product/goldengate/dirprm/mgr.prm does not exist.

Step 3:
GGSCI (SourceMachine) 2> exit

SourceMachine:/u01/app/oracle/product/goldengate> vi /u01/app/oracle/product/goldengate/dirprm/mgr.prm
"/u01/app/oracle/product/goldengate/dirprm/mgr.prm" [New file]

PORT 7809
USERID goldengate, PASSWORD goldengate
PURGEOLDEXTRACTS /u01/app/oracle/product/goldengate/dirdat/ex, USECHECKPOINTS

Save the file and exit vi with :wq.

Step 4:
SourceMachine:/u01/app/oracle/product/goldengate> ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.1.1.1 OGGCORE_11.1.1_PLATFORMS_110421.2040
AIX 5L, ppc, 64bit (optimized), Oracle 11g on Apr 22 2011 03:09:27
Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.

Step 5:
GGSCI (SourceMachine) 1> stop manager
Manager is already stopped.

Step 6:
GGSCI (SourceMachine) 2> start manager
Manager started.

Step 7:
GGSCI (SourceMachine) 3> info all

Program     Status      Group       Lag           Time Since Chkpt
MANAGER     RUNNING

Step 8: Repeat the above steps on the TARGET database machine.

PART 3: Performing the initial data load

On Source Machine

Step 1: Create the initial data extract task 'load1'.

GGSCI (SOURCEMACHINE) 5> ADD EXTRACT load1, SOURCEISTABLE
EXTRACT added.

Since this is a one-time data extract task, the source of the data is not the transaction log files of the RDBMS (in this case the online and archived redo log files) but the table data itself; that is why the keyword SOURCEISTABLE is used.
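Because load1 was added with SOURCEISTABLE it runs as a one-time task rather than a continuous Extract process, so it does not show up in a plain INFO ALL listing. As a small sketch, task extracts can be listed with the standard TASKS option:

GGSCI (SOURCEMACHINE)> INFO EXTRACT *, TASKS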
Step 2: Create the parameter file for the extract group load1.

=============================================================================
EXTRACT: name of the extract group
USERID/PASSWORD: the database user configured earlier for Extract (this user is created in the source database)
RMTHOST: the IP address or hostname of the target system
MGRPORT: the port where the Manager process is running on the target
TABLE: the table(s) being extracted and replicated. This can be specified in a number of ways, using wildcard characters to include or exclude tables as well as entire schemas.
=============================================================================

GGSCI (SOURCEMACHINE) 6> EDIT PARAMS load1

EXTRACT load1
USERID goldengate, PASSWORD goldengate
RMTHOST <target hostname or IP address>, MGRPORT 7809
RMTTASK replicat, GROUP load2
TABLE admn.iostats;

On Target Machine

Step 3: Create the initial data load task 'load2'. Since this is a one-time data load task, we use the keyword SPECIALRUN.

GGSCI (TARGETMACHINE) 1> ADD REPLICAT load2, SPECIALRUN
REPLICAT added.

Step 4: Create the parameter file for the Replicat group load2.

=============================================================================
REPLICAT: name of the Replicat group created for the initial data load
USERID/PASSWORD: database credentials for the Replicat user (this user is created in the target database)
ASSUMETARGETDEFS: the source table structure exactly matches the target table structure
MAP: with GoldenGate the target database structure can differ entirely from the source, in terms of table names as well as column definitions. This parameter maps the source tables to the target tables, which are identical in this case.
=============================================================================

GGSCI (TARGETMACHINE) 2> EDIT PARAMS load2
/u01/app/oracle/product/goldengate/dirprm/load2.prm [New file]

REPLICAT load2
USERID goldengate, PASSWORD goldengate
ASSUMETARGETDEFS
MAP admn.iostats, TARGET admn.iostats;

On Source

Step 5:
SQL> select count(*) from admn.iostats;

  COUNT(*)
----------
 270757000

On Target

Step 6:
SQL> select count(*) from admn.iostats;

  COUNT(*)
----------
         0

On Source

Step 7: Start the initial load data extract task on the source system.

We now start the initial data load task load1 on the source. Since this is a one-time task, we will initially see the extract process RUNNING, and after the data load is complete it will be STOPPED. We do not have to manually start the Replicat process on the target; that is done automatically when the Extract task is started on the source system.
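Before starting the task, both parameter files can be double-checked with the standard VIEW PARAMS command (a small sketch using the group names created above):

GGSCI (SOURCEMACHINE)> VIEW PARAMS load1
GGSCI (TARGETMACHINE)> VIEW PARAMS load2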
On Source

Step 8:
GGSCI (SOURCEMACHINE) 16> START EXTRACT load1

Sending START request to MANAGER
EXTRACT LOAD1 starting

Step 9:
GGSCI (SOURCEMACHINE) 28> info extract load1

EXTRACT    LOAD1     Last Started 2010-02-11 11:33   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Table admn.iostats
                     2011-05-20 14:33:16  Record 757000
Task                 SOURCEISTABLE

Step 10:
GGSCI (SOURCEMACHINE) 29> info extract load1

EXTRACT    LOAD1     Last Started 2010-02-11 11:33   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table admn.iostats
                     2011-05-11 21:11:18  Record 270757000
Task                 SOURCEISTABLE

On Target

Step 11:
SQL> select count(*) from admn.iostats;

  COUNT(*)
----------
 270757000

PART 4: Setting up online Change Synchronization

Oracle GoldenGate supports data capture from an Oracle Database that uses ASM. This section applies to Oracle ASM 11g Release 2. A few additional setup steps are required in order to use ASM for the Oracle redo and/or archive logs:

· Extract requires a connection into an ASM instance to be able to read the transaction logs. The connection has to go through the Oracle Database listener, and because the ASM instance is only mounted (not open), an entry for the ASM instance must be added to the listener configuration file in order to let incoming connections through. See the Oracle GoldenGate for Windows and UNIX Administrator's Guide as well as Oracle Support note 340277.1 for more details.
· The tnsnames.ora file(s) or LDAP directory must include entries for the ASM instance(s) so that connect strings can resolve connection requests.
· The Oracle GoldenGate Extract parameter file must include a TRANLOGOPTIONS ASMUSER entry.

An ASM instance is not a regular database instance and does not support the concept of regular database users. As a result, the user Oracle GoldenGate uses to connect to ASM is always an administrative user (SYSASM), which would also allow startup and shutdown of the instance. The connection itself will not fail if the instance is down, but Oracle GoldenGate Extract will not be able to start, because any queries issued against the ASM instance will fail.

If you use Oracle GoldenGate to extract from an Oracle RAC database that uses ASM to store its logs, and your connection to ASM could be routed to any ASM instance in the cluster, then you may run into the situation where Extract connects to an ASM instance that happens to be down. In that case Extract will crash, even though the database may still be running fine on other servers, and the Extract process will fall behind. To prevent this scenario with Oracle ASM 11g Release 2, follow these steps (an illustrative sketch of steps 2 and 3 follows the list):

1. Add a dependency on the local resource ora.asm to the clusterware resource that manages GoldenGate. This resource is available only when the local ASM instance is running, and it introduces a slight change to the crsctl add resource command used to register the GoldenGate application.
2. In the Extract parameter file, include a TRANLOGOPTIONS parameter that points at the local ASM instance.
3. Finally, make sure that the connect string @asm always connects to the local ASM instance, i.e. the ASM entry in tnsnames.ora specifies a different connection on each node (the entry on node 1 resolves to the node 1 ASM instance, the entry on node 2 to the node 2 instance, and so on).
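As an illustration of steps 2 and 3 above, a per-node tnsnames.ora alias and the matching Extract parameter could look roughly like the sketch below. The alias name asm, the host rac1-vip, the instance name +ASM1 and the password placeholder are assumptions and have to be adapted to the environment (on node 2 the alias would point at the node 2 host and +ASM2):

# tnsnames.ora on node 1 (assumed names)
asm =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (CONNECT_DATA = (SID = +ASM1)(UR = A))
  )

Extract parameter file entry (standard TRANLOGOPTIONS syntax):

TRANLOGOPTIONS ASMUSER sys@asm, ASMPASSWORD <asm password>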
Using Transportable Tablespaces

You can move large amounts of data between databases simply by moving data files from one database to another: you copy the data files from the source database to the target database and import the data dictionary information about the tablespaces from the source database into the target database.

Transportable tablespaces are used mainly in a data warehouse context. Some of the important use cases are:

Moving data from the source database (OLTP) into a data warehouse database
Moving data from a staging database into a data warehouse database
Moving data from a data warehouse to a data mart
Performing tablespace point-in-time recovery
Archiving historical data

Transporting a tablespace

The steps to transport a tablespace are:

Grant the necessary privileges
Make sure the tablespace set is self-contained (transportable)
Generate the transportable tablespace set (data dictionary information)
Copy the data files to the target server
Perform the tablespace import

Privilege required to run the transportability check

grant execute_catalog_role to vallep;

Make sure the tablespace set is transportable

execute dbms_tts.transport_set_check('test01, test02', true);
select * from transport_set_violations;

Note: if any violations are reported, resolve them (or check with Oracle Support) before continuing.

Generate the transportable tablespace set

alter tablespace test01 read only;
alter tablespace test02 read only;

expdp vallep/vallep directory=data_pump_dir dumpfile=test.dmp transport_tablespaces=test01,test02

Note: The tablespaces must be in read-only mode, and only metadata (data dictionary information) is contained in the Data Pump export.

Copy the data to the target server

Now copy all the data files and the Data Pump export file to the target server.

Tablespace import

impdp system/system dumpfile=test.dmp transport_datafiles='test01.dbf','test02.dbf' directory=data_pump_dir

Transporting tablespaces on different platforms

You can transport tablespaces between different server architectures, provided both platforms appear in v$transportable_platform. The remaining consideration is the endian format, which refers to the byte ordering of the platform; it can be one of two types, big or little, and if the source and target formats differ you must convert the data files to the format required by the target.
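For example, to compare the endian formats of the two platforms used earlier in this article (AIX source, Linux x86-64 target), a query along the following lines can be run on either database. The platform name strings below are the usual entries for these platforms but should be verified against v$transportable_platform; in this AIX-to-Linux case the formats differ (Big vs Little), so a conversion is required:

select platform_name, endian_format
from   v$transportable_platform
where  platform_name in ('AIX-Based Systems (64-bit)', 'Linux x86 64-bit');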
To check which format you have:

select t.endian_format
from v$transportable_platform t, v$database d
where t.platform_name = d.platform_name;

The steps to transport a tablespace to a platform with a different endian format are:

Ensure that the tablespaces are self-contained
Make the tablespaces read-only
Export the metadata using Data Pump Export
Convert the data files to match the target endian format
Copy the converted data files to the target system
Use the Data Pump Import utility to import the metadata

Source Server

Privilege required to run the transportability check

grant execute_catalog_role to vallep;

Make sure the tablespace set is transportable

execute dbms_tts.transport_set_check('test01, test02', true);
select * from transport_set_violations;

Note: if any violations are reported, resolve them (or check with Oracle Support) before continuing.

Make the tablespaces read only

alter tablespace test01 read only;
alter tablespace test02 read only;

Generate the transportable tablespace set

expdp vallep/vallep directory=data_pump_dir dumpfile=test.dmp transport_tablespaces=test01,test02

Note: remember we are only exporting the metadata.

Convert the data files on the source server (this can also be done on the target server, where RMAN's CONVERT DATAFILE ... FROM PLATFORM form is used instead)

Convert the data files (using FORMAT):

RMAN> convert tablespace test01 to platform 'HP-UX (64-bit)' format '/temp/%U';

Note: Oracle reports the new file name of each converted file.

Convert the data files (using DB_FILE_NAME_CONVERT):

RMAN> convert tablespace test01 to platform 'HP-UX (64-bit)' db_file_name_convert = 'c:\oracle\test01.dbf','c:\convert\test01.dbf';

Copy the data

Now copy all the data files and the Data Pump export file to the target server, if not already copied.

Target Server

Tablespace import

impdp system/system dumpfile=test.dmp transport_datafiles='test01.dbf','test02.dbf' directory=data_pump_dir
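Once the import completes, the transported tablespaces are normally switched back to read/write on the target (and on the source as well, if it remains in use); a minimal follow-up:

alter tablespace test01 read write;
alter tablespace test02 read write;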