Golden Gate Recipe: configure Oracle 12c integrated replicat
Contents
- 1 Introduction
- 2 Step 1: On Source and Target Install the Oracle Golden Gate software
- 3 Step 2 Configure Golden Gate for the source database
- 3.1 Step 2.1 Create the sample schema for replication
- 3.2 Step 2.2 Examine the schema for replication
- 3.3 Step 2.3 Enable Golden Gate and create the Golden Gate user
- 3.4 Step 2.4 Enable archive log mode and force logging
- 3.5 Step 2.5 Enable supplemental logging
- 3.6 Step 2.6 Enable support for sequences
- 3.7 Step 2.7 Configure SQL*Net connectivity
- 3.8 Step 2.8 Create a Golden Gate database service
- 3.9 Step 2.9 Configure UNDO RETENTION
- 3.10 Step 2.10 Install the UTL_SPADV package
- 3.11 Step 2.11 Create a credential store for the Golden Gate user
- 3.12 Step 2.12 Configure the Streams Pool
- 3.13 Step 2.13 Create a backup of the database
- 4 Step 3 Configure Golden Gate for the target database
- 4.1 Step 3.1 Enable Golden Gate and create the Golden Gate user
- 4.2 Step 3.2 Enable archive log mode and force logging
- 4.3 Step 3.3 Enable support for sequences
- 4.4 Step 3.4 Configure SQL*Net connectivity
- 4.5 Step 3.5 Create a Golden Gate database service
- 4.6 Step 3.6 Install the UTL_SPADV package
- 4.7 Step 3.7 Create a credential store for the Golden Gate user
- 4.8 Step 3.8 Configure the Streams Pool
- 4.9 Step 3.9 Create a backup of the database
- 5 Step 4 On the source database: configure the manager, extract and pump processes
- 6 Step 5 Instantiate the target database
- 6.1 Step 5.1 Background information
- 6.2 Step 5.2 On the source database: check for open transactions
- 6.3 Step 5.3 On the source database: register the current SCN
- 6.4 Step 5.4 On the source database: create a directory for the datapump export files
- 6.5 Step 5.5 On the source database server: perform the datapump export
- 6.6 Step 5.6 On the target database server: transfer the dumpfiles from the source
- 6.7 Step 5.7 On the target: create the directory
- 6.8 Step 5.8 On the target: perform the import
- 7 Step 6 On the target: Configure the replication
Introduction
- In this recipe we will set up Golden Gate integrated replicat between two Oracle Database 12c Release 2 (non-multitenant) databases. We will use Golden Gate software version 12.2.0.1.
- Source environment: database prd12c on rob01db01
- Target environment: database trg12c on rob01db02
- The demo schema 'HR' will be replicated.
The process is as follows:
- First we will prepare the source and the target database for Golden Gate
- Then we will configure and start the Golden Gate manager, extract and pump processes on the source database, and the manager process on the target database
- Then we will instantiate the target database (initial copy of the source database)
- Then we will configure and start the Golden Gate replicat process on the target.
Step 1: On Source and Target Install the Oracle Golden Gate software
In this step we will install the Golden Gate software on database servers rob01db01 and rob01db02.
Step 1.1 Download the Golden Gate Software
The required software can be downloaded from Oracle OTN. The software is downloaded as a zip file (fbo_ggs_Linux_x64_shiphome.zip).
Step 1.2 Upload and extract the software to the database servers
The software is staged in directory /u01/temp:
[root@rob01db01 u02]# mkdir -p /u01/temp
[root@rob01db01 u02]# chown oracle:oinstall /u01/temp
[oracle@rob01db01 temp]$ cd /u01/temp
[oracle@rob01db01 temp]$ ls
fbo_ggs_Linux_x64_shiphome.zip
[oracle@rob01db01 temp]$ unzip fbo_ggs_Linux_x64_shiphome.zip
Step 1.3 Install the software
Start an X Windows session, log on as the oracle user, create the software home directory and install the software:
[root@rob01db01 u02]# mkdir -p /u01/app/oracle/product/12.2.0.1/gg_home1
[oracle@rob01db01 ~]$ cd /u01/temp/fbo_ggs_Linux_x64_shiphome/Disk1
[oracle@rob01db01 Disk1]$ ./runInstaller
Now fill out the installation screens as shown below:
Step 1 of 5 Installation Option: choose "Oracle GoldenGate for Oracle Database 12c (799.0 MB)"
Step 2 of 5 Installation Details: Software Location: /u01/app/oracle/product/12.2.0.1/gg_home1; uncheck the box "Start Manager" (we will configure that later)
Step 3 of 5 Summary: click on Install
The installation should be ready within 15 minutes or so.
Step 1.4 Configure the Golden Gate software environment
Add the following variables to the .bash_profile of the oracle user
[oracle@rob01db01 ~]$ vi .bash_profile
export GG_HOME=/u01/app/oracle/product/12.2.0.1/gg_home1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/u01/app/oracle/product/12.2.0.1/gg_home1:/lib:/usr/lib
Create the required Golden Gate directories:
[oracle@rob01db01 ~]$ . ./.bash_profile
[oracle@rob01db01 ~]$ cd $GG_HOME
[oracle@rob01db01 gg_home1]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Dec 12 2015 02:56:48
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (rob01db01.localdomain) 1> create subdirs

Creating subdirectories under current directory /u01/app/oracle/product/12.2.0.1/gg_home1

Parameter files              /u01/app/oracle/product/12.2.0.1/gg_home1/dirprm: created
Report files                 /u01/app/oracle/product/12.2.0.1/gg_home1/dirrpt: created
Checkpoint files             /u01/app/oracle/product/12.2.0.1/gg_home1/dirchk: created
Process status files         /u01/app/oracle/product/12.2.0.1/gg_home1/dirpcs: created
SQL script files             /u01/app/oracle/product/12.2.0.1/gg_home1/dirsql: created
Database definitions files   /u01/app/oracle/product/12.2.0.1/gg_home1/dirdef: created
Extract data files           /u01/app/oracle/product/12.2.0.1/gg_home1/dirdat: created
Temporary files              /u01/app/oracle/product/12.2.0.1/gg_home1/dirtmp: created
Credential store files       /u01/app/oracle/product/12.2.0.1/gg_home1/dircrd: created
Masterkey wallet files       /u01/app/oracle/product/12.2.0.1/gg_home1/dirwlt: created
Dump files                   /u01/app/oracle/product/12.2.0.1/gg_home1/dirdmp: created
Repeat this step for the other database server (rob01db02 in this recipe)
Step 2 Configure Golden Gate for the source database
Step 2.1 Create the sample schema for replication
SQL> create tablespace hr_data datafile size 100M autoextend on;
SQL> @?/demo/schema/human_resources/hr_main.sql

specify password for HR as parameter 1:
Enter value for 1: hr
specify default tablespace for HR as parameter 2:
Enter value for 2: hr_data
specify temporary tablespace for HR as parameter 3:
Enter value for 3: temp
specify password for SYS as parameter 4:
Enter value for 4: oracle
specify log path as parameter 5: /home/oracle/log
We will now create an extra table inside the HR schema which will have a new row inserted every minute:
grant dba to hr;

create table hr.test_table (
  id         NUMBER GENERATED BY DEFAULT AS IDENTITY,
  event_date TIMESTAMP);

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'hr.fill_table',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN insert into hr.test_table(event_date) values (systimestamp); commit; END;',
    repeat_interval => 'freq=MINUTELY;interval=1',
    enabled         => TRUE,
    comments        => 'fill test table with records');
END;
/
Let's see what objects we now have in our test schema:
SQL> select object_type, count(object_name) from dba_objects where owner = 'HR' group by object_type order by object_type;

OBJECT_TYPE             COUNT(OBJECT_NAME)
----------------------- ------------------
INDEX                                   19
JOB                                      1
PROCEDURE                                2
SEQUENCE                                 4
TABLE                                    8
TRIGGER                                  2
VIEW                                     1

7 rows selected.
Step 2.2 Examine the schema for replication
Oracle provides a few scripts that query the source database for potential issues with the Golden Gate replication. While this is not really required in this simple demo environment, it is very important to run them in real-life scenarios, where you might encounter all sorts of challenges: for example, unsupported data types (ROWID, AQ tables), tables without primary or unique keys, etc. See the Oracle notes below:
Classical & Integrated Extract
- Oracle GoldenGate database Schema Profile check script for Oracle DB (Doc ID 1296168.1). Script
- Oracle GoldenGate database Complete Database Profile check script for Oracle DB (All Schemas) (Doc ID 1298562.1)
Integrated Extract only
- GoldenGate Integrated Capture and Integrated Replicat Health check Script (Doc ID 1448324.1). Note that a specific health check script exists for each Oracle Database version.
You can download the scripts from Oracle Support and run them as shown below. Or you can skip this step altogether as it is not really required for this demo environment.
SQL> @ogg_full-DB_CheckOracle_07082015.sql
REM spools to AllSchemaCheckOracle.out

SQL> @ogg_full-schemaCheckOracle_06112015.sql
REM spools to schemaCheckOracle.&&schema_name.out

SQL> set linesize 900
SQL> set LONG 990000
SQL> set longchunksize 9000
SQL> set numwidth 15
SQL> set heading off
SQL> set feedback off
SQL> set trimspool on
SQL> set appinfo on
SQL> set truncate off
SQL> spool ogg12c_integrated.html
SQL> @ogg_12102.sql
You can examine the output of the check scripts.
Step 2.3 Enable Golden Gate and create the Golden Gate user
SQL> alter system set enable_goldengate_replication = true scope = both;
SQL> create tablespace gg_data datafile size 10m autoextend on next 10m maxsize unlimited;
SQL> create user ggadmin identified by ggadmin default tablespace gg_data temporary tablespace temp;
SQL> grant dba to ggadmin;
SQL> exec DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN');

PL/SQL procedure successfully completed.
Remark: the grant above gives user ggadmin the rights needed to make RMAN Golden Gate aware. GGADMIN informs RMAN about the status of the archived redo logs, so RMAN knows which of them are still needed by Golden Gate and won't delete those files prematurely.
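If you want to see this from the database side, a quick sanity check (my own addition, not part of the official procedure) is to query v$database; RMAN will not delete archived redo logs containing changes at or above this SCN:

SQL> select min_required_capture_change# from v$database;

Until an extract has actually been registered (see step 4.3 below), this column simply shows 0.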
Step 2.4 Enable archive log mode and force logging
Since Golden Gate uses the online redo log files to replicate data, it is important that all changes are logged in the online redo log files, and therefore force logging is required. Perform the following:
SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
NO

SQL> alter database force logging;

Database altered.

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
YES
Note: nologging operations are now overridden by the force logging setting. This may increase redo log generation if nologging operations previously took place in the database.
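To check whether nologging operations took place in the past, a quick query (again my own sanity check, not part of the official steps) against v$datafile shows the last unrecoverable change per datafile:

SQL> select name, unrecoverable_change#, unrecoverable_time
     from v$datafile
     where unrecoverable_change# > 0;

If this returns rows with recent timestamps, consider taking a fresh backup after enabling force logging.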
Step 2.5 Enable supplemental logging
Normally, the data stored in the online redo log files consists of the ROWID of the row being changed and the new values of the changed column(s). Because the ROWIDs in the source and target database differ, the ROWID cannot be used to uniquely identify the row on the target. Therefore, supplemental logging is required. When supplemental logging is enabled, Oracle will record one of the following values within the online redo logs along with any column that has been changed:
- The primary key of the table
- A unique key of the table
- All columns of the table (with some exceptions, such as LOBs).
SQL> SELECT supplemental_log_data_min FROM v$database;

SUPPLEMENTAL_LOG_DATA_MIN
-------------------------
NO

SQL> alter database add supplemental log data;

Database altered.

SQL> SELECT supplemental_log_data_min FROM v$database;

SUPPLEMENTAL_LOG_DATA_MIN
-------------------------
YES
Next to supplemental logging at the database level, we also have to configure supplemental logging at the schema (or table) level. We will use "add schematrandata" to add supplemental logging at the schema level. Add schematrandata (as opposed to add trandata) ensures that supplemental logging is automatically enabled for altered and newly created tables. Also, with add schematrandata you don't have to specify every individual table as you would with add trandata. While this is not an issue in our simple demo environment, you will appreciate it when thousands of tables in the database need to be replicated.
So, let’s run this functionality for the HR schema
[oracle@rob01db01 datafile]$ cd $GG_HOME
[oracle@rob01db01 gg_home1]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Dec 12 2015 02:56:48
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (rob01db01.localdomain) 1> dblogin userid ggadmin, password ggadmin
Successfully logged into database.

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 2> add schematrandata hr

2017-07-01 02:23:58  INFO    OGG-01788  SCHEMATRANDATA has been added on schema hr.
2017-07-01 02:23:59  INFO    OGG-01976  SCHEMATRANDATA for scheduling columns has been added on schema hr.
Now, let's see what has happened:
GGSCI (rob01db01.localdomain as ggadmin@prd12c) 2> info schematrandata hr

2017-07-01 02:25:52  INFO    OGG-06480  Schema level supplemental logging, excluding non-validated keys, is enabled on schema HR.
2017-07-01 02:25:52  INFO    OGG-01980  Schema level supplemental logging is enabled on schema HR for all scheduling columns.
2017-07-01 02:25:52  INFO    OGG-10462  Schema HR have 8 prepared tables for instantiation.
And if we look into the database with SQL:
SQL> select table_name from dba_tables where owner = 'HR';

TABLE_NAME
--------------------------------------------------------------------------------
REGIONS
COUNTRIES
LOCATIONS
DEPARTMENTS
JOBS
EMPLOYEES
JOB_HISTORY
TEST_TABLE

8 rows selected.

-- view the hr.regions table definition
SQL> desc hr.regions
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 REGION_ID                                 NOT NULL NUMBER
 REGION_NAME                                        VARCHAR2(25)

-- view which columns are added as supplemental logging
SQL> select TABLE_NAME, COLUMN_NAME from table(LOGMNR$ALWAYS_SUPLOG_COLUMNS('HR','REGIONS'));

TABLE_NAME                     COLUMN_NAME
------------------------------ ------------------------------
REGIONS                        REGION_ID

-- as we can see, only REGION_ID has been added to the supplemental logging; this is the primary key of the table

SQL> desc hr.test_table
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL NUMBER
 EVENT_DATE                                         TIMESTAMP(6)

SQL> select TABLE_NAME, COLUMN_NAME from table(LOGMNR$ALWAYS_SUPLOG_COLUMNS('HR','TEST_TABLE'));

TABLE_NAME                     COLUMN_NAME
------------------------------ ------------------------------
TEST_TABLE                     ID
TEST_TABLE                     EVENT_DATE

-- as we can see, all table columns have been added to the supplemental logging because the table has no primary or unique key
There is much more to be said about tables without a primary key or unique key; I wrote a separate post about that subject on my website. For now, this will do.
Step 2.6 Enable support for sequences
Reference: My Oracle Support Note “Best Practices To Handle Sequences In Golden Gate. (Doc ID 1470281.1)”
[oracle@rob01db01 gg_home1]$ cd $GG_HOME
[oracle@rob01db01 gg_home1]$ echo "GGSCHEMA GGADMIN" > GLOBALS
[oracle@rob01db01 gg_home1]$ sqlplus / as sysdba @sequence.sql

SQL*Plus: Release 12.2.0.1.0 Production on Sat Jul 1 02:33:57 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Please enter the name of a schema for the GoldenGate database objects:
GGADMIN
Setting schema name to GGADMIN

UPDATE_SEQUENCE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

GETSEQFLUSH

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

SEQTRACE

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

REPLICATE_SEQUENCE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

STATUS OF SEQUENCE SUPPORT
--------------------------------------------------------------
SUCCESSFUL installation of Oracle Sequence Replication support
The sequence.sql script creates the following procedures in the GGADMIN schema: SEQTRACE, GETSEQFLUSH, REPLICATE_SEQUENCE and UPDATE_SEQUENCE.
You can specify the sequences in the extract parameter file (see later on). However, it is not actually necessary to name the sequences explicitly if all sequences are to be replicated; as shown later, the wildcard mappings for the tables and the sequences can simply be combined.
Step 2.7 Configure SQL*Net connectivity
You need to configure SQL*Net connectivity on the source database server to both the source and the target database. Add entries as shown below to the tnsnames.ora file:
[oracle@rob01db01 ~]$ vi $ORACLE_HOME/network/admin/tnsnames.ora

-- Add the following:
PRD12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rob01db01.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = prd12c.localdomain)
    )
  )

TRG12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rob01db02.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = trg12c.localdomain)
    )
  )
Now test the connectivity as follows:
-- Connection from source node to source database:
[oracle@rob01db01 ~]$ sqlplus ggadmin/ggadmin@prd12c

SQL> select name from v$database union select host_name from v$instance;

NAME
----------------------------------------------------------------
PRD12C
rob01db01.localdomain

-- Connection from source node to target database:
[oracle@rob01db01 ~]$ sqlplus ggadmin/ggadmin@trg12c

SQL> select name from v$database union select host_name from v$instance;

NAME
----------------------------------------------------------------
TRG12C
rob01db02.localdomain
Step 2.8 Create a Golden Gate database service
This step is only required if you are running a RAC database on either source or target. Although you connect to a RAC database, the Golden Gate processes run on a single node at a time. You therefore configure a new service for Golden Gate that runs on a single node but provides failover capability to the other compute node.
In the example below a service is created for the source database:
[oracle@rob01db01 ~]$ srvctl add service -db prd12c -s prd12c_gg -r prd12c1 -a prd12c2
Also, you will need to add the following alias to the tnsnames.ora on both source and target, and use it for the Golden Gate connections:
prd12c_gg =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rob01-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = prd12c_gg)
    )
  )
You can read more about Golden Gate and high availability in another post on this website.
Step 2.9 Configure UNDO RETENTION
- By default, GoldenGate 12c uses Flashback Query to obtain values from the undo (rollback) tablespaces for special objects (user-defined types, nested tables, LOBs and XMLType objects).
- Because Golden Gate uses flashback query to create before images of these data types, the "flashback any table" system privilege must be granted to GGADMIN. Since we already granted the DBA role to GGADMIN, that is not necessary here.
- We also have to adjust the UNDO_RETENTION parameter to a much higher value. This is the setting recommended by Golden Gate to accommodate long-running transactions.
SQL> show parameter undo_retention

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_retention                       integer     900

SQL> alter system set undo_retention=86400;

System altered.
To calculate the space that is required in the undo tablespace, use the following formula together with the view V$UNDOSTAT:
undo_space = UNDO_RETENTION * UPS + overhead
Where:
- undo_space is the number of undo blocks.
- UNDO_RETENTION is the value of the UNDO_RETENTION parameter (in seconds).
- UPS is the number of undo blocks for each second.
- overhead is the minimal overhead for metadata (transaction tables, etc.).
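As a rough sketch (it ignores the overhead term above), you can plug the peak undo rate from V$UNDOSTAT into this formula and express the result in megabytes:

SQL> select (select value from v$parameter where name = 'undo_retention') *
            (select max(undoblks / ((end_time - begin_time) * 86400)) from v$undostat) *
            (select value from v$parameter where name = 'db_block_size') / 1024 / 1024
            as undo_space_mb
     from dual;

Note that V$UNDOSTAT only covers the recent workload history, so run this while a representative workload is active.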
Step 2.10 Install the UTL_SPADV package
The Oracle Streams Performance Advisor (SPADV) enables monitoring of the integrated GoldenGate server processes which are used by integrated Extract and integrated Replicat, and provides information about how these processes are performing. SPADV statistics are collected and analyzed using the UTL_SPADV package.
SQL> @$ORACLE_HOME/rdbms/admin/utlspadv.sql
Refer to the document Oracle GoldenGate Performance Best Practices for more tuning information about this utility.
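A minimal usage sketch is shown below; the parameterless calls rely on the package defaults, so check the PL/SQL Packages and Types Reference for your release before relying on them:

SQL> set serveroutput on
SQL> exec UTL_SPADV.START_MONITORING
-- let it collect statistics while replication is active, then:
SQL> exec UTL_SPADV.SHOW_STATS
SQL> exec UTL_SPADV.STOP_MONITORING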
Step 2.11 Create a credential store for the Golden Gate user
When setting up Oracle GoldenGate, the capture (extract) and apply (replicat) parameter files need to be configured to log in to the database against which they perform their operations. For that, the Oracle GoldenGate user name and password have to be provided in the parameter files.
To make this login information more secure, we can create a userid alias that the extract or replicat process can use to log in to the database. In order to create a login alias, a credential store needs to be created. Below are the steps to create the credential store and the associated alias.
After logging in to the GoldenGate Software Command Interface (GGSCI), create the credential store. By default the credential store will be kept in the "dircrd" directory under $GG_HOME.
[oracle@rob01db01 gg_home1]$ cd $GG_HOME
[oracle@rob01db01 gg_home1]$ ./ggsci

GGSCI (rob01db01.localdomain) 1> add credentialstore
Credential store created in ./dircrd/.

GGSCI (rob01db01.localdomain) 2> alter credentialstore add user ggadmin, password ggadmin alias alias_ggadmin
Credential store in ./dircrd/ altered.
The next time you need to log on to the database you can use this alias instead of a userid and password:
GGSCI (rob01db01.localdomain) 3> dblogin useridalias alias_ggadmin
Successfully logged into database.
Step 2.12 Configure the Streams Pool
When using integrated Replicat, the Streams pool must be configured; if you use non-integrated Replicat, the Streams pool is not necessary. The size requirement of the Streams pool for integrated Replicat is based on a single Golden Gate parameter, MAX_SGA_SIZE. This Golden Gate parameter is not to be confused with the Oracle database parameter SGA_MAX_SIZE.
The MAX_SGA_SIZE parameter defaults to INFINITE, which allows the Replicat process to use as much of the Streams pool as possible. Oracle does not recommend setting the MAX_SGA_SIZE parameter for the Replicat process (unlike the Extract process, where we explicitly set this parameter). Instead, set the STREAMS_POOL_SIZE initialization parameter using the following calculation:
(MAX_SGA_SIZE (defaults to 1G) * number of integrated Replicat processes) + 25% head room
For example, on a system with one integrated Replicat process, the calculation would be as follows:
(1GB * 1) * 1.25 = 1.25GB => STREAMS_POOL_SIZE = 1280M
SQL> alter system set streams_pool_size=1280M scope=both;
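You can then verify the actual allocation; note that the pool may be grown lazily by automatic memory management, so the allocated size can lag behind the target:

SQL> select name, round(bytes/1024/1024) as mb from v$sgainfo where name = 'Streams Pool Size';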
Refer to the document Oracle GoldenGate Performance Best Practices for more tuning information and about using the Oracle Streams Performance Analyzer.
Step 2.13 Create a backup of the database
RMAN> run {
  backup as compressed backupset database
  format '/u02/backup/prd12c/_%d_%U'
  keep until time 'sysdate+365'
  restore point 'GG_COPY';
}
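As a quick check (my own addition, not part of the original recipe), you can confirm that the KEEP backup also created the restore point:

SQL> select name, scn, time from v$restore_point where name = 'GG_COPY';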
Step 3 Configure Golden Gate for the target database
Step 3.1 Enable Golden Gate and create the Golden Gate user
SQL> alter system set enable_goldengate_replication = true scope = both;
SQL> create tablespace gg_data datafile size 10m autoextend on next 10m maxsize unlimited;
SQL> create user ggadmin identified by ggadmin default tablespace gg_data temporary tablespace temp;
SQL> grant dba to ggadmin;
SQL> exec DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN');

PL/SQL procedure successfully completed.
Step 3.2 Enable archive log mode and force logging
This step is not really required for the target database. However, as part of Oracle best practices, and because we will want to create online backups later on, let's configure it.
Note that it is not required to activate supplemental logging on the target database.
SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
NO

SQL> alter database force logging;

Database altered.

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
YES
Step 3.3 Enable support for sequences
Reference: My Oracle Support Note “Best Practices To Handle Sequences In Golden Gate. (Doc ID 1470281.1)”
[oracle@rob01db02 gg_home1]$ cd $GG_HOME
[oracle@rob01db02 gg_home1]$ echo "GGSCHEMA GGADMIN" > GLOBALS
[oracle@rob01db02 gg_home1]$ sqlplus / as sysdba @sequence.sql

SQL*Plus: Release 12.2.0.1.0 Production on Sat Jul 1 02:33:57 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Please enter the name of a schema for the GoldenGate database objects:
GGADMIN
Setting schema name to GGADMIN

UPDATE_SEQUENCE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

GETSEQFLUSH

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

SEQTRACE

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

REPLICATE_SEQUENCE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

STATUS OF SEQUENCE SUPPORT
--------------------------------------------------------------
SUCCESSFUL installation of Oracle Sequence Replication support
Step 3.4 Configure SQL*Net connectivity
You need to configure SQL*Net connectivity on the target database server to both the source and the target database. Add entries as shown below to the tnsnames.ora file:
[oracle@rob01db02 ~]$ vi $ORACLE_HOME/network/admin/tnsnames.ora

-- Add the following:
PRD12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rob01db01.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = prd12c.localdomain)
    )
  )

TRG12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rob01db02.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = trg12c.localdomain)
    )
  )
Now test the connectivity as follows:
-- Connection from target node to source database:
[oracle@rob01db02 ~]$ sqlplus ggadmin/ggadmin@prd12c

SQL> select name from v$database union select host_name from v$instance;

NAME
----------------------------------------------------------------
PRD12C
rob01db01.localdomain

-- Connection from target node to target database:
[oracle@rob01db02 ~]$ sqlplus ggadmin/ggadmin@trg12c

SQL> select name from v$database union select host_name from v$instance;

NAME
----------------------------------------------------------------
TRG12C
rob01db02.localdomain
Step 3.5 Create a Golden Gate database service
This step is only required if you are running a RAC database on either source or target. Although you connect to a RAC database, the Golden Gate processes run on a single node at a time. You therefore configure a new service for Golden Gate that runs on a single node but provides failover capability to the other compute node.
In the example below a service is created for the target database:
[oracle@rob01db02 ~]$ srvctl add service -db trg12c -s trg12c_gg -r trg12c1 -a trg12c2
Also, you will need to add the following alias to the tnsnames.ora on both source and target, and use it for the Golden Gate connections:
trg12c_gg =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rob01-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = trg12c_gg)
    )
  )
Step 3.6 Install the UTL_SPADV package
The Oracle Streams Performance Advisor (SPADV) enables monitoring of the integrated GoldenGate server processes which are used by integrated Extract and integrated Replicat, and provides information about how these processes are performing. SPADV statistics are collected and analyzed using the UTL_SPADV package.
SQL> @$ORACLE_HOME/rdbms/admin/utlspadv.sql
Refer to the document Oracle GoldenGate Performance Best Practices for more tuning information about this utility.
Step 3.7 Create a credential store for the Golden Gate user
[oracle@rob01db02 gghome_1]$ cd $GG_HOME
[oracle@rob01db02 gghome_1]$ ./ggsci

GGSCI (rob01db02.localdomain) 1> add credentialstore
Credential store created in ./dircrd/.

GGSCI (rob01db02.localdomain) 2> alter credentialstore add user ggadmin, password ggadmin alias alias_ggadmin
Credential store in ./dircrd/ altered.
Step 3.8 Configure the Streams Pool
SQL> alter system set streams_pool_size=1280M scope=both;
Step 3.9 Create a backup of the database
Just because I am a paranoid DBA, let's make a backup of the current situation:
RMAN> run {
  backup as compressed backupset database
  format '/u02/backup/trg12c/_%d_%U'
  keep until time 'sysdate+365'
  restore point 'GG_COPY';
}
Step 4 On the source database: configure the manager, extract and pump processes
Step 4.1 On the source: configure and start the manager process
4.1.1. Explanation
The Golden Gate Manager process is responsible for managing all the Golden Gate processes and operations within a given Golden Gate instance, including the Extract, Data Pump and Replicat processes. It provides the following functionality:
- Starting Logger, Extract, Replicat and Syncfile processes.
- Monitoring and reporting the status of Oracle GoldenGate processing
- Starting the dynamic Collector process on the target
- Automatically restarting critical processes
- Threshold reporting, such as when Extract falls behind the TMF-audit trail
- Managing resources for the TMF audit trail, such as maintaining copies of audit trails on backup volumes
- Purging trails when Extract and Replicat has finished with them
- Pre-allocating log trail space for Logger processing
Every Golden Gate instance has only one Manager process running. This means that we will have one Manager process on the compute node, which directs all the different extract and replicat processes. Note that if you have more than one Golden Gate instance up and running on the same database server (as may be the case when you host the database on a consolidated platform such as Exadata), you may require more physical Golden Gate installations. This is explained in another post on this website.
4.1.2 Configure the manager process
[oracle@rob01db01 gg_home1]$ cd $GG_HOME
[oracle@rob01db01 gg_home1]$ ./ggsci

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 3> edit params mgr

-- Manager process file
PORT 7812
DYNAMICPORTLIST 7810-7815
-- Five ports is an advised minimum.
-- By naming them, you have better control and can fix the numbers for the firewall security.
-- Set it to a higher number when, for example, the change management process is long.
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 2
-- AUTOSTART ER *
-- AUTORESTART extract *, retries 10, WAITMINUTES 5, RESETMINUTES 60
4.1.3 Start the manager process
GGSCI (rob01db01.localdomain) 6> start mgr
Manager started.

GGSCI (rob01db01.localdomain) 9> info mgr
Manager is running (IP port rob01db01.localdomain.7812, Process ID 14811).
MANAGER RUNNING
Note: with the command "view report mgr" you get a detailed report of the manager process.
Step 4.2 On the target: configure and start the manager process
Perform exactly the same steps as above to configure and start the manager process on the target. You should also have the manager process running on the target after that:
GGSCI (rob01db02.localdomain) 3> info mgr
Manager is running (IP port rob01db02.localdomain.7812, Process ID 11025).
Note that you can configure the manager process as a highly available service which starts automatically upon reboot, but I discuss that in another post.
Step 4.3 On the source: configure and start the extract process
4.3.1 Background Information
Oracle recommends using the integrated extract and replicat functionality for reasons of performance and ease of management (reference: Oracle GoldenGate Performance Best Practices, Oracle White Paper, November 2014).
In integrated capture mode, the GoldenGate Extract process no longer reads the online redo log files (as is the case with the classical capture mode), but instead, the GoldenGate Extract process interacts directly with a database logmining server to receive data changes in the form of logical change records (LCR).
4.3.2 On the source: Configure the extract process
First we will create a subdirectory for the extract process; the extract process will write trail files containing all the committed changes to this directory:
[oracle@rob01db01 gg_home1]$ mkdir -p /u01/app/oracle/product/12.2.0.1/gg_home1/dirdat/prd12c
Now we can create the extract configuration file:
[oracle@rob01db01 gg_home1]$ ./ggsci

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 6> edit param xprd12c

extract xprd12c
USERIDALIAS alias_ggadmin
-- set the environment
SETENV (ORACLE_SID='prd12c')
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
discardfile ./dirrpt/prd12c.dsc, purge
exttrail ./dirdat/prd12c/ex
-- include DDL
DDL INCLUDE MAPPED
DDLOPTIONS REPORT
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 1024)
-- replication objects
-- This is how to exclude a particular table from a schema:
-- tableexclude hr.tablename
-- (this rule must be before the "table" rule)
table hr.*;
sequence hr.*;
-- list how many records are processed every half an hour
reportcount every 30 minutes, rate
4.3.3 Start the extract
GGSCI (rob01db01.localdomain) 2> dblogin useridalias alias_ggadmin
Successfully logged into database.

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 3> register extract xprd12c database

2017-07-01 04:29:09  INFO    OGG-02003  Extract XPRD12C successfully registered with database at SCN 351205.

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 4> add extract xprd12c, integrated tranlog, begin now
EXTRACT (Integrated) added.

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 5> ADD EXTTRAIL ./dirdat/prd12c/ex, EXTRACT XPRD12C megabytes 10
EXTTRAIL added.

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 6> start extract xprd12c
Sending START request to MANAGER ...
EXTRACT XPRD12C starting
Note: a known issue is that the extract shuts down again (status ABENDED); when you issue "view report xprd12c" you will see the following error:
2017-07-01 04:33:51 ERROR OGG-01044 The trail './dirdat/prd12c/hr' is not assigned to extract 'XPRD12C'. Assign the trail to the extract with the command "ADD EXTTRAIL/RMTTRAIL ./dirdat/prd12c/hr, EXTRACT XPRD12C".
You just have to add the trail file to the extract again and restart the extract:
GGSCI (rob01db01.localdomain as ggadmin@prd12c) 8> ADD EXTTRAIL ./dirdat/prd12c/hr, EXTRACT XPRD12C
EXTTRAIL added.

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 9> start extract xprd12c
Sending START request to MANAGER ...
EXTRACT XPRD12C starting
4.3.4 View the current status
GGSCI (rob01db01.localdomain as ggadmin@prd12c) 10> info extract xprd12c

EXTRACT    XPRD12C   Last Started 2017-07-01 04:34   Status RUNNING
Checkpoint Lag       00:00:09 (updated 00:00:03 ago)
Process ID           15307
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2017-07-01 04:35:08
                     SCN 0.367261 (367261)

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 12> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     XPRD12C     00:01:35      00:00:05
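From the database side, the logmining server behind the integrated extract shows up as a capture process; a quick check (assuming the extract was registered as above):

SQL> select capture_name, status, capture_type from dba_capture;

You should see a capture process named after the extract (typically OGG$CAP_XPRD12C) with status ENABLED.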
When we navigate to the trail file directory we can see the trail files that are being generated by the extract process. In the next step we will configure the pump process that will copy the local trail file on the source database server to the target database server.
GGSCI (rob01db01.localdomain) 18> info extract xprd12c detail

EXTRACT    XPRD12C   Last Started 2017-07-01 04:39   Status RUNNING
Checkpoint Lag       00:00:10 (updated 00:00:03 ago)
Process ID           15409
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2017-07-01 04:39:06
                     SCN 0.367918 (367918)

Target Extract Trails:
Trail Name            Seqno  RBA   Max MB  Trail Type
./dirdat/prd12c/ex    0      1459  10      EXTTRAIL

Integrated Extract outbound server first scn: Unavailable.Need DBLOGIN.
Integrated Extract outbound server filtering start scn: Unavailable.Need DBLOGIN.

Extract Source    Begin              End
Not Available     2017-07-01 04:33   2017-07-01 04:39
Not Available     2017-07-01 04:33   2017-07-01 04:37
Not Available     2017-07-01 04:33   2017-07-01 04:37
Not Available     2017-07-01 04:33   2017-07-01 04:37
Not Available     2017-07-01 04:33   2017-07-01 04:37
Not Available     * Initialized *    2017-07-01 04:33
Not Available     * Initialized *    2017-07-01 04:33
Not Available     * Initialized *    2017-07-01 04:33
Not Available     * Initialized *    2017-07-01 04:33

Current directory    /u01/app/oracle/product/12.2.0.1/gg_home1
Report file          /u01/app/oracle/product/12.2.0.1/gg_home1/dirrpt/XPRD12C.rpt
Parameter file       /u01/app/oracle/product/12.2.0.1/gg_home1/dirprm/xprd12c.prm
Checkpoint file      /u01/app/oracle/product/12.2.0.1/gg_home1/dirchk/XPRD12C.cpe
Process file         /u01/app/oracle/product/12.2.0.1/gg_home1/dirpcs/XPRD12C.pce
Error log            /u01/app/oracle/product/12.2.0.1/gg_home1/ggserr.log
GGSCI (rob01db01.localdomain as ggadmin@prd12c) 11> exit

[oracle@rob01db01 gg_home1]$ cd dirdat/prd12c
[oracle@rob01db01 prd12c]$ ls -ltr
total 4
-rw-r----- 1 oracle oinstall 1459 Jul 1 04:34 hr000000000
Step 4.4 On the source: configure and start the pump process
4.4.1 Background information
The Data Pump process is an optional process, but Oracle Best Practices recommend this setup. The Pump process reads the local trail files and ships them over to the Collector process on the target server, which writes the remote trail files. The Pump process is basically just like the Extract process, but instead of reading the online redo log files, the Pump process reads the local trail files.
To increase performance of the Data Pump when table names and table structures are not altered and no data is being filtered, use the PASSTHRU parameter in the Data Pump parameter file. This prevents the Data Pump from looking up table definitions from either the database or from a data definitions file. The PASSTHRU parameter is table specific and can be used with a wildcard to apply to multiple tables. If there are tables that require mapping or data conversions, the NOPASSTHRU parameter should be used; tables listed with the NOPASSTHRU parameter must be specified after the PASSTHRU parameter. Doing this increases Data Pump performance and reduces CPU usage.
4.4.2 On the source: Configure the pump configuration file
First we have to create a directory on the target that will host all the remote trail files:
[oracle@rob01db02 gghome_1]$ mkdir -p /u01/app/oracle/product/12.2.0.1/gghome_1/dirdat/trg12c
Now configure the following on the source:
[oracle@rob01db01 gg_home1]$ cd $GG_HOME
[oracle@rob01db01 gg_home1]$ ./ggsci

GGSCI (rob01db01.localdomain) 1> edit param pprd12c

EXTRACT pprd12c
-- PASSTHRU indicates that the data does not need to be filtered or transformed
PASSTHRU
-- RMTHOST points to the remote system
RMTHOST rob01db02, MGRPORT 7812, TCPBUFSIZE 1048576, TCPFLUSHBYTES 1048576
-- RMTTRAIL is the remote trail on the target database server
RMTTRAIL dirdat/trg12c/pu
DDL INCLUDE MAPPED
-- objects that are replicated to the target system
table hr.*;
sequence hr.*;
REPORTCOUNT EVERY 15 MINUTES, RATE
A remark about some of the parameters (refer to the document Oracle GoldenGate Performance Best Practices for more information):
The two RMTHOST options, TCPBUFSIZE and TCPFLUSHBYTES, are very useful for increasing the buffer sizes and network packets sent by Data Pump over the network from the source to the target system. This is especially beneficial for high-latency networks. These parameters should be set to a value of 1 MB (1,048,576 bytes) or the calculated value, whichever is larger.
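As a worked example of "the calculated value": the usual sizing basis is the bandwidth-delay product. On a 100 Mbit/s link with a 10 ms round-trip time that is (100,000,000 / 8) bytes/s x 0.01 s = 125,000 bytes, which is well below 1 MB, so the 1,048,576-byte values used in the parameter file above apply.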
4.4.3 register and start the pump process
[oracle@rob01db01 gg_home1]$ ./ggsci

GGSCI (rob01db01.localdomain) 6> dblogin useridalias alias_ggadmin
Successfully logged into database.

GGSCI (rob01db02.localdomain as ggadmin@trg12c) 11> register extract pprd12c database;

2017-07-01 05:15:05  INFO    OGG-02003  Extract PPRD12C successfully registered with database at SCN 372364.

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 7> add extract pprd12c, EXTTRAILSOURCE dirdat/prd12c/ex
EXTRACT added.

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 8> add rmttrail dirdat/trg12c/pu, extract pprd12c, megabytes 10
RMTTRAIL added.

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 6> start extract pprd12c
Sending START request to MANAGER ...
EXTRACT PPRD12C starting
4.4.4 monitor the pump process
GGSCI (rob01db01.localdomain as ggadmin@prd12c) 7> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     PPRD12C     00:00:00      00:00:03
EXTRACT     RUNNING     XPRD12C     00:00:05      00:00:10

GGSCI (rob01db01.localdomain as ggadmin@prd12c) 8> info extract pprd12c

EXTRACT    PPRD12C   Last Started 2017-07-01 05:16   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:00 ago)
Process ID           15940
Log Read Checkpoint  File dirdat/prd12c/ex000000009
                     2017-07-01 05:07:54.000000  RBA 8844734
If we now log on to the target database server rob01db02, we should see the remote trail files being written in the remote trail file location:
[oracle@rob01db02 trg12c]$ cd /u01/app/oracle/product/12.2.0.1/gghome_1/dirdat/trg12c
[oracle@rob01db02 trg12c]$ ls -ltr
total 96552
-rw-r----- 1 oracle oinstall 9999952 Jul 1 05:14 pu000000000
-rw-r----- 1 oracle oinstall 9999759 Jul 1 05:14 pu000000001
-rw-r----- 1 oracle oinstall 9999990 Jul 1 05:14 pu000000002
-rw-r----- 1 oracle oinstall 9999831 Jul 1 05:14 pu000000003
-rw-r----- 1 oracle oinstall 9999842 Jul 1 05:14 pu000000004
-rw-r----- 1 oracle oinstall 9999716 Jul 1 05:14 pu000000005
-rw-r----- 1 oracle oinstall 9999737 Jul 1 05:14 pu000000006
-rw-r----- 1 oracle oinstall 9999737 Jul 1 05:14 pu000000007
-rw-r----- 1 oracle oinstall 9999793 Jul 1 05:14 pu000000008
-rw-r----- 1 oracle oinstall 8847224 Jul 1 05:14 pu000000009
So, we have all the processes configured. What we will do next is to instantiate the target database with an initial copy of the source database. After that, we will start the replicat process to replicate any new changes on the source to the target.
Step 5 Instantiate the target database
Step 5.1 Background information
There are several ways to perform the target instantiation. Please refer to document Oracle GoldenGate Best Practices: Instantiation from an Oracle Source Database (Doc ID 1276058.1)
We will use Oracle Data Pump for the target instantiation. The export can be performed directly on the production system by using the FLASHBACK_SCN option; the FLASHBACK_SCN used for the export is then the CSN value used for the Replicat. Note that you have to use the same value for FLASHBACK_SCN for your entire export, even if you use multiple export files (e.g. you run multiple sessions in parallel, or, in the case of Data Pump Export, you use Oracle's parallelism).
Step 5.2 On the source database: check for open transactions
SQL> SELECT s.sid, s.serial#
     FROM v$session s
     WHERE s.saddr in (SELECT t.ses_addr FROM v$transaction t, dba_rollback_segs r WHERE t.xidusn = r.segment_id);

no rows selected
Step 5.3 On the source database: register the current SCN
SQL> col current_scn format 999999999999
SQL> select current_scn from v$database;

  CURRENT_SCN
-------------
       386164
Step 5.4 On the source database: create a directory for the datapump export files
SQL> !mkdir /home/oracle/prd12c_gg
SQL> create directory gg_dir as '/home/oracle/prd12c_gg';

Directory created.
Step 5.5 On the source database server: perform the datapump export
Note that in real life you may want to optimize the performance of your export and import operations; you can read about that in other posts on this website. We do not have to worry about that in our simple demo environment:
[oracle@rob01db01 ~]$ expdp ggadmin/ggadmin directory=gg_dir parallel=4 dumpfile=gg_prd12c_%U.dmp \
flashback_scn=386164 schemas=HR

Export: Release 12.1.0.2.0 - Production on Sat Jul 1 05:23:56 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "GGADMIN"."SYS_EXPORT_SCHEMA_01": ggadmin/******** directory=gg_dir parallel=4 dumpfile=gg_prd12c_%U.dmp flashback_scn=386164 schemas=HR
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 38.5 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
. . exported "HR"."COUNTRIES"       6.460 KB      25 rows
. . exported "HR"."DEPARTMENTS"     7.125 KB      27 rows
. . exported "HR"."EMPLOYEES"       17.09 KB     107 rows
. . exported "HR"."JOBS"            7.109 KB      19 rows
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
. . exported "HR"."OBJECTS"         31.97 MB  323584 rows
. . exported "HR"."JOB_HISTORY"     7.195 KB      10 rows
. . exported "HR"."LOCATIONS"       8.437 KB      23 rows
. . exported "HR"."REGIONS"         5.546 KB       4 rows
. . exported "HR"."TEST_TABLE"      9.117 KB     190 rows
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCOBJ
Master table "GGADMIN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for GGADMIN.SYS_EXPORT_SCHEMA_01 is:
  /home/oracle/prd12c_gg/gg_prd12c_01.dmp
  /home/oracle/prd12c_gg/gg_prd12c_02.dmp
  /home/oracle/prd12c_gg/gg_prd12c_03.dmp
  /home/oracle/prd12c_gg/gg_prd12c_04.dmp
Job "GGADMIN"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Jul 1 05:25:18 2017 elapsed 0 00:01:20
Step 5.6 On the target database server: transfer the dumpfiles from the source
[oracle@rob01db02 ~]$ mkdir $HOME/trg12c_gg
[oracle@rob01db02 trg12c_gg]$ scp oracle@rob01db01:/home/oracle/prd12c_gg/gg_prd12c_01.dmp /home/oracle/trg12c_gg/gg_prd12c_01.dmp
oracle@rob01db01's password: *****
gg_prd12c_01.dmp                       100%  152KB 152.0KB/s   00:00
[oracle@rob01db02 trg12c_gg]$ scp oracle@rob01db01:/home/oracle/prd12c_gg/gg_prd12c_02.dmp /home/oracle/trg12c_gg/gg_prd12c_02.dmp
oracle@rob01db01's password: *****
gg_prd12c_02.dmp                       100%   32MB  16.0MB/s   00:02
[oracle@rob01db02 trg12c_gg]$ scp oracle@rob01db01:/home/oracle/prd12c_gg/gg_prd12c_03.dmp /home/oracle/trg12c_gg/gg_prd12c_03.dmp
oracle@rob01db01's password: *****
gg_prd12c_03.dmp                       100%   88KB  88.0KB/s   00:00
[oracle@rob01db02 trg12c_gg]$ scp oracle@rob01db01:/home/oracle/prd12c_gg/gg_prd12c_04.dmp /home/oracle/trg12c_gg/gg_prd12c_04.dmp
oracle@rob01db01's password: *****
gg_prd12c_04.dmp                       100%  364KB 364.0KB/s   00:00
Step 5.7 On the target: create the directory
SQL> create directory gg_dir as '/home/oracle/trg12c_gg';

Directory created.

SQL> alter database flashback on;

Database altered.

SQL> create restore point just_in_case;

Restore point created.
Step 5.8 On the target: perform the import
Perform the import operation on the target:
[oracle@rob01db02 trg12c_gg]$ impdp system/oracle directory=gg_dir parallel=4 dumpfile=gg_prd12c_%U.dmp schemas=HR

Import: Release 12.1.0.2.0 - Production on Sat Jul 1 05:26:19 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01": system/******** directory=gg_dir parallel=4 dumpfile=gg_prd12c_%U.dmp schemas=HR
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HR"."COUNTRIES"       6.460 KB      25 rows
. . imported "HR"."DEPARTMENTS"     7.125 KB      27 rows
. . imported "HR"."EMPLOYEES"       17.09 KB     107 rows
. . imported "HR"."JOBS"            7.109 KB      19 rows
. . imported "HR"."JOB_HISTORY"     7.195 KB      10 rows
. . imported "HR"."LOCATIONS"       8.437 KB      23 rows
. . imported "HR"."REGIONS"         5.546 KB       4 rows
. . imported "HR"."TEST_TABLE"      9.117 KB     190 rows
. . imported "HR"."OBJECTS"         31.97 MB  323584 rows
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCOBJ
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed at Sat Jul 1 05:27:19 2017 elapsed 0 00:00:51
Step 6 On the target: Configure the replication
Step 6.1 Background information
Just like it recommends integrated Extract, Oracle also recommends using integrated Replicat, introduced in Oracle GoldenGate release 12.1 and Oracle Database release 11.2.0.4. Integrated Replicat leverages the apply process functionality that is available inside the database. Referential integrity and DDL operations are automatically applied in the correct order. This relieves the database administrator from having to understand how to partition tables between Replicat processes based on foreign key constraints, or from having to ensure that the correct Replicat handles the DDL for tables. Integrated Replicat also offers automatic parallelism, which automatically increases or decreases the number of apply processes based on the current workload and database performance. Management and tuning of Replicat performance are simplified since you do not have to manually configure multiple Replicat processes to distribute the tables between them. Integrated Replicat automatically enables the asynchronous commit feature, so processing can continue immediately after each COMMIT command is issued.
- The Replicat process will read all the changes in the remote trail files and will apply those changes to the target database.
- The Replicat process maintains checkpoints that provide a known position in the trail file for recovery and restart. By default, this checkpoint information is stored in a checkpoint file for the Replicat process.
- Additionally, checkpoint information can also be stored in a checkpoint table in the target database so that the checkpoint information is included within the Replicat transaction itself. This provides a higher level of protection against inconsistencies between the checkpoint file and the applied transactions.
- Integrated Replicat automatically records checkpoint information within the target database so checkpoint table creation is not necessary.
- Integrated Replicat uses dynamic allocation of apply server processes to distribute the work of applying transactions between the processes. New apply processes are created or removed dynamically to maintain the optimal Replicat throughput. This automation reduces the administrative overhead of manually dividing the work between Replicat processes.
- Oracle recommends to first configure a single Replicat process and monitor apply lag and performance. Adding more Replicat processes should only be carried out when performance of a single Replicat causes unacceptable apply latency.
- Please refer to doc "ORACLE GOLDENGATE PERFORMANCE BEST PRACTICES" for more information.
Step 6.2 On the target: Configure the replicat process
[oracle@rob01db02 trg12c_gg]$ cd $GG_HOME
[oracle@rob01db02 gghome_1]$ ./ggsci

GGSCI (rob01db02.localdomain) 1> edit param rtrg12c

REPLICAT rtrg12c
USERIDALIAS alias_ggadmin
-- set the environment
SETENV (ORACLE_SID='trg12c')
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
HANDLECOLLISIONS
DBOPTIONS SUPPRESSTRIGGERS
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/trg12c.dsc, purge
DDL INCLUDE MAPPED
DDLOPTIONS REPORT
DBOPTIONS INTEGRATEDPARAMS(parallelism 4)
-- To speed up replication add "batchsql"
-- See the GG Reference Guide for more info, options and possible restrictions.
-- BATCHSQL
-- list how many records are processed every half an hour
REPORTCOUNT EVERY 30 MINUTES, RATE
MAP hr.*, TARGET hr.*;
Step 6.3 On the target: register and start the replicat process
GGSCI (rob01db02.localdomain) 3> dblogin useridalias alias_ggadmin
Successfully logged into database.

GGSCI (rob01db02.localdomain as ggadmin@trg12c) 4> register replicat rtrg12c database

2017-07-01 05:34:52  INFO    OGG-02528  REPLICAT RTRG12C successfully registered with database as inbound server OGG$RTRG12C.

GGSCI (rob01db02.localdomain as ggadmin@trg12c) 5> add replicat rtrg12c, integrated, exttrail ./dirdat/trg12c/pu
REPLICAT (Integrated) added.

GGSCI (rob01db02.localdomain as ggadmin@trg12c) 19> start replicat rtrg12c, aftercsn 386164
Sending START request to MANAGER ...
REPLICAT RTRG12C starting

Note: the value for aftercsn is the FLASHBACK_SCN (386164) that we registered in step 5.3 and used for the Data Pump export, so the replicat only applies transactions that committed after the instantiation point.
Step 6.4 On the target: monitor the replicat process
GGSCI (rob01db02.localdomain as ggadmin@trg12c) 7> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
REPLICAT    RUNNING     RTRG12C     00:00:00      00:01:33
GGSCI (rob01db02.localdomain as ggadmin@trg12c) 10> info replicat rtrg12c

REPLICAT   RTRG12C   Last Started 2017-07-01 18:51   Status RUNNING
INTEGRATED
Checkpoint Lag       00:00:00 (updated 00:00:07 ago)
Process ID           21206
Log Read Checkpoint  File ./dirdat/trg12c/pu000000009
                     2017-07-01 18:52:22.233113  RBA 8848549
GGSCI (rob01db02.localdomain as ggadmin@trg12c) 11> info replicat rtrg12c detail

REPLICAT   RTRG12C   Last Started 2017-07-01 18:51   Status RUNNING
INTEGRATED
Checkpoint Lag       00:00:00 (updated 00:00:08 ago)
Process ID           21206
Log Read Checkpoint  File ./dirdat/trg12c/pu000000009
                     2017-07-01 18:53:52.817230  RBA 8848549

INTEGRATED Replicat
DBLOGIN Provided, inbound server name is OGG$RTRG12C in ATTACHED state
Current Log BSN value: (no data)
Integrated Replicat low watermark: (no data)
(All source transactions prior to this scn have been applied)
Integrated Replicat high watermark: (no data)
(Some source transactions between this scn and the low watermark may have been applied)

Extract Source                  Begin             End
./dirdat/trg12c/pu000000009     * Initialized *   2017-07-01 18:53
./dirdat/trg12c/pu000000000     * Initialized *   First Record
./dirdat/trg12c/pu000000000     * Initialized *   First Record

Current directory    /u01/app/oracle/product/12.2.0.1/gghome_1
Report file          /u01/app/oracle/product/12.2.0.1/gghome_1/dirrpt/RTRG12C.rpt
Parameter file       /u01/app/oracle/product/12.2.0.1/gghome_1/dirprm/rtrg12c.prm
Checkpoint file      /u01/app/oracle/product/12.2.0.1/gghome_1/dirchk/RTRG12C.cpr
Process file         /u01/app/oracle/product/12.2.0.1/gghome_1/dirpcs/RTRG12C.pcr
Error log            /u01/app/oracle/product/12.2.0.1/gghome_1/ggserr.log
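From the database side, the integrated replicat shows up as an apply (inbound) server; a quick check (assuming the replicat was registered as above):

SQL> select apply_name, status from dba_apply;

You should see the inbound server OGG$RTRG12C with status ENABLED while the replicat is running.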
Let's create a new table in the source database and verify that this table is being replicated to the target database:
-- on the source:
SQL> create table hr.my_new_table as select * from dba_objects;

Table created.

SQL> select count(*) from hr.my_new_table;

  COUNT(*)
----------
     20476

-- on the target:
SQL> select count(*) from hr.my_new_table;

  COUNT(*)
----------
     20476