In my last blog post I created a thin clone of my Swingbench database on my Oracle1 server, and mounted that thin clone to my Oracle2 server.
I also renamed the cloned ASM diskgroups SWINGDATA and FRA to CLONESWINGDATA and CLONEFRA, as I already had ASM diskgroups with those names on Oracle2. As a final step, I renamed the individual ASM disks within each of the cloned ASM diskgroups.
In this blog post I am going to mount and open the clone on Oracle2.
Note: There is a video for this post
I already have a Swingbench database on my Oracle2 VM. I could shut it down and then use the database to mount and open the clone, but instead I am going to create a new instance that I will use to open the clone.
I could use dbca to create a new instance, but that would create new database files, and since I already have my database files, I do not want to do that. Instead I will use srvctl to create a new blank instance:
First, let’s create two OS environment variables: NEW_DBN for my new database name, and NEW_SID for my new database SID. In this example I am setting them both to clonedb, but you can see how you could set them to different values if you needed to.
[oracle@Oracle2 ~]$ export NEW_DBN=clonedb
[oracle@Oracle2 ~]$ export NEW_SID=clonedb
We also need an INIT.ORA file for our new instance.
Oracle gives us a sample INIT.ORA file in $ORACLE_HOME/dbs, but it is very basic and very out of date. The comments stop around 1993, and the database compatibility defaults to 11.2. The following replaces or deletes the bits we don’t need, leaving us with a stub we can use:
[oracle@Oracle2 ~]$ cat $ORACLE_HOME/dbs/init.ora | grep -v \# | grep -v compatible | grep -v db_recovery_file_dest \
> | sed "s:<ORACLE_BASE>:${ORACLE_BASE}:g" \
> | sed "s:ORCLXDB:${NEW_SID}XDB:g" \
> | sed "s:ORCL:${NEW_DBN}:g" \
> | sed "s:orcl:${NEW_SID}:g" \
> | sed "s:flash:fast:g" > $ORACLE_HOME/dbs/init${NEW_SID}.ora
Next we need to add back the bits we stripped out, plus some additional required parameters not included in the sample file:
[oracle@Oracle2 ~]$ echo "compatible = '19.0.0'" >> $ORACLE_HOME/dbs/init${NEW_SID}.ora
[oracle@Oracle2 ~]$ echo "db_recovery_file_dest=+FRA" >> $ORACLE_HOME/dbs/init${NEW_SID}.ora
[oracle@Oracle2 ~]$ echo "db_recovery_file_dest_size=64G" >> $ORACLE_HOME/dbs/init${NEW_SID}.ora
[oracle@Oracle2 ~]$ echo "db_unique_name = '${NEW_SID}_${NEW_DBN}'" >> $ORACLE_HOME/dbs/init${NEW_SID}.ora
[oracle@Oracle2 ~]$ echo "enable_pluggable_database = FALSE" >> $ORACLE_HOME/dbs/init${NEW_SID}.ora
[oracle@Oracle2 ~]$ echo "filesystemio_options='SETALL'" >> $ORACLE_HOME/dbs/init${NEW_SID}.ora
[oracle@Oracle2 ~]$ echo "memory_target = 1500M" >> $ORACLE_HOME/dbs/init${NEW_SID}.ora
We’ve set the enable_pluggable_database and compatible to match what the source database needs. Other parameters such as db_unique_name and memory_target will be dictated by the server we are mounting to.
If we check what we have now, it should look something like this:
[oracle@Oracle2 ~]$ cat $ORACLE_HOME/dbs/init${NEW_SID}.ora
db_name='clonedb'
memory_target=1G
processes = 150
audit_file_dest='/u01/app/oracle/admin/clonedb/adump'
audit_trail ='db'
db_block_size=8192
db_domain=''
diagnostic_dest='/u01/app/oracle'
dispatchers='(PROTOCOL=TCP) (SERVICE=clonedbXDB)'
open_cursors=300
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDOTBS1'
control_files = (ora_control1, ora_control2)
compatible = '19.0.0'
db_recovery_file_dest=+FRA
db_recovery_file_dest_size=64G
db_unique_name = 'clonedb_clonedb'
enable_pluggable_database = FALSE
filesystemio_options='SETALL'
memory_target = 1500M
We can now create our new instance:
[oracle@Oracle2 ~]$ srvctl add database -d $NEW_SID \
> -o $ORACLE_HOME \
> -n $NEW_DBN \
> -i $NEW_SID \
> -p $ORACLE_HOME/dbs/init${NEW_SID}.ora
And we can check our new instance with:
[oracle@Oracle2 ~]$ srvctl config database -d $NEW_SID
Database unique name: clonedb
Database name: clonedb
Oracle home: /u01/app/oracle/product/19.0.0/dbhome_1
Oracle user: oracle
Spfile: /u01/app/oracle/product/19.0.0/dbhome_1/dbs/initclonedb.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups:
Services:
OSDBA group: oinstall
OSOPER group:
Database instance: clonedb
Next we need to create the audit dump directory specified in the audit_file_dest parameter; typically this is $ORACLE_BASE/admin/SID/adump. If we later plan on using Data Pump we will also need a dpdump directory, and I typically create a pfile directory here too:
[oracle@Oracle2 ~]$ mkdir -p $ORACLE_BASE/admin/$NEW_SID/adump
[oracle@Oracle2 ~]$ mkdir -p $ORACLE_BASE/admin/$NEW_SID/pfile
[oracle@Oracle2 ~]$ mkdir -p $ORACLE_BASE/admin/$NEW_SID/dpdump
Lastly I find it useful to add an entry to the oratab file. This just makes it simpler to source the environment using oraenv.
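For example, assuming the default /etc/oratab location and the Oracle home shown in the srvctl output above, the entry would look something like this:

[oracle@Oracle2 ~]$ echo "clonedb:/u01/app/oracle/product/19.0.0/dbhome_1:N" >> /etc/oratab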
Now we can source our environment to the new clonedb and start the instance to a no-mount state:
[oracle@Oracle2 ~]$ . oraenv
ORACLE_SID = [slob] ? clonedb
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@Oracle2 ~]$ srvctl start database -d clonedb -o nomount
Now we can connect to the clonedb SID and check the spfile and control_files parameters:
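Since oraenv has already pointed the environment at clonedb, a local SYSDBA connection is all that is needed here (shown for completeness):

[oracle@Oracle2 ~]$ sqlplus / as sysdba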
SQL> show parameter spfile
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string
SQL> show parameter control
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
control_file_record_keep_time integer 7
control_files string /u01/app/oracle/product/19.0.0
/dbhome_1/dbs/ora_control1, /u
01/app/oracle/product/19.0.0/d
bhome_1/dbs/ora_control2
control_management_pack_access string DIAGNOSTIC+TUNING
We have a number of changes to make to the new instance, so to make life simpler, I will create an SPFILE:
SQL> create spfile from pfile;

File created.
Then stop and restart the database, back to a no-mount state, so that the instance comes up using the new SPFILE.
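A minimal way to do that from SQL*Plus, assuming the new spfileclonedb.ora in $ORACLE_HOME/dbs is picked up automatically on restart:

SQL> shutdown immediate
SQL> startup nomount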
The next task is to set the control files to point at the ones we cloned from Oracle1. We also need to set the db_name parameter to match what the cloned database expects. We can generate the script to make these changes by executing the following SQL on the SOURCE database on Oracle1:
SQL> select 'alter system set db_name='||name||' scope=spfile;' "-- SET DB_NAME" from v$database;
-- SET DB_NAME
------------------------------------------------
alter system set db_name=SWING scope=spfile;
SQL> select
       'alter system set control_files ='||chr(10)||chr(39)||           -- value||
       regexp_replace(
         regexp_replace(
           regexp_replace(value, '\+SWINGDATA', '+CLONESWINGDATA' )     -- replace SWINGDATA with CLONESWINGDATA
         ,'\+FRA', '+CLONEFRA' )                                        -- replace FRA with CLONEFRA
       , ', ', chr(39)||','||chr(10)||chr(39), 2)||                     -- replace the comma-space with a tick,LF,comma,tick
       chr(39)||chr(10)||' scope=spfile;' "-- RENAME_CONTROLFILES"
     from v$parameter
     where name = 'control_files';
-- RENAME_CONTROLFILES
--------------------------------------------------------------------------------
alter system set control_files =
'+CLONESWINGDATA/SWING/control01.ctl',
'+CLONEFRA/SWING/control02.ctl'
scope=spfile;
You can see the complete SQL commands here: generate_mount_snap_clone_commands.sql
The second SQL statement uses regular expressions to replace the current ASM diskgroup names with the new ones. Since we renamed the ASM diskgroups on Oracle2, the control file paths need to be updated to match.
Copy the two generated commands and paste them into the target clonedb database on Oracle2:
SQL> alter system set db_name=SWING scope=spfile;

System altered.

SQL> alter system set control_files =
  '+CLONESWINGDATA/SWING/control01.ctl',
  '+CLONEFRA/SWING/control02.ctl'
  scope=spfile;

System altered.
We should now be able to shut down and then restart clonedb to a MOUNT state, as it is now pointing at the correct control files on CLONESWINGDATA and CLONEFRA.
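A quick sketch of that restart, with a sanity check afterwards; once mounted, v$database should report the name SWING with an open mode of MOUNTED:

SQL> shutdown immediate
SQL> startup mount
SQL> select name, open_mode from v$database;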
The next step is to rename the database files to match the new ASM diskgroup names. We can generate this script on either the source or the clone, as once the clone is mounted, the v$datafile view is readable:
set pagesize 999
set linesize 255

select 'alter database rename file '||chr(39)||name||chr(39)||' to '||chr(39)||
       regexp_replace(name, '\+SWINGDATA', '+CLONESWINGDATA' )||
       chr(39)||';' "-- RENAME_DATAFILES"
from v$datafile;

-- RENAME_DATAFILES
--------------------------------------------------------------------------------
alter database rename file '+SWINGDATA/SWING/system01.dbf' to '+CLONESWINGDATA/SWING/system01.dbf';
alter database rename file '+SWINGDATA/SWING/sysaux01.dbf' to '+CLONESWINGDATA/SWING/sysaux01.dbf';
alter database rename file '+SWINGDATA/SWING/undotbs01.dbf' to '+CLONESWINGDATA/SWING/undotbs01.dbf';
alter database rename file '+SWINGDATA/SWING/DATAFILE/swingdata.266.1079536115' to '+CLONESWINGDATA/SWING/DATAFILE/swingdata.266.1079536115';
alter database rename file '+SWINGDATA/SWING/users01.dbf' to '+CLONESWINGDATA/SWING/users01.dbf';
Spool or copy the output of this script and then apply it back to clonedb.
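If you prefer to spool, a rough SQL*Plus pattern is shown below; the /tmp file name is just a placeholder, and the query is the v$datafile statement above:

SQL> set heading off feedback off
SQL> spool /tmp/rename_datafiles.sql
SQL> -- run the v$datafile query shown above
SQL> spool off
SQL> -- then, connected to clonedb:
SQL> @/tmp/rename_datafiles.sql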
Lastly, the redo log files also need to be renamed:
select 'alter database rename file '||chr(39)||member||chr(39)||' to '||chr(39)||
       regexp_replace(member, '\+FRA', '+CLONEFRA' )||
       chr(39)||';' "-- RENAME_LOGFILES"
from v$logfile;

-- RENAME_LOGFILES
--------------------------------------------------------------------------------
alter database rename file '+FRA/SWING/ONLINELOG/group_3.377.1079535933' to '+CLONEFRA/SWING/ONLINELOG/group_3.377.1079535933';
alter database rename file '+FRA/SWING/ONLINELOG/group_2.374.1079535913' to '+CLONEFRA/SWING/ONLINELOG/group_2.374.1079535913';
alter database rename file '+FRA/SWING/ONLINELOG/group_1.373.1079535893' to '+CLONEFRA/SWING/ONLINELOG/group_1.373.1079535893';
alter database rename file '+FRA/SWING/ONLINELOG/group_4.378.1079535967' to '+CLONEFRA/SWING/ONLINELOG/group_4.378.1079535967';
alter database rename file '+FRA/SWING/ONLINELOG/group_5.379.1079535985' to '+CLONEFRA/SWING/ONLINELOG/group_5.379.1079535985';
Execute this output against clonedb, and it should now be possible to open the clonedb database.
If the source database was placed into hot backup mode before the thin clone was generated, it will be necessary to execute alter database end backup before clonedb can be opened.
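For example, a minimal open sequence, assuming the clone was taken while the source was in hot backup mode (if it was not, simply skip the end backup step):

SQL> alter database end backup;
SQL> alter database open;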
Although that was quite a lot of work, note that many of the steps shown are only needed for the first clone. Powerstore gives us the ability to refresh an existing thin clone, which causes all of the ASM disks to immediately sync to the current state of the source.
The refreshed volumes do not need to be mapped or attached to the target server a second time. If you added them to MPIO on a bare-metal server, that too would not need to be adjusted. UDEV rules would not have to be changed. The clonedb target instance would not have to be recreated, and the SPFILE would not have to be regenerated.
What would need to be re-executed in the event of a refresh would be:
- Renaming the ASM diskgroups on the target server.
- Renaming the ASM disks of the renamed diskgroups on the target server.
- Renaming the target database data files and log files.
All of this is considerably simpler than the first clone shown above.
Also note that for those looking for a much simpler solution, Dell Technologies supports the AppSync product on Powerstore, which fully automates the entire cloning process from start to finish with a simple-to-navigate graphical interface. AppSync supports both Oracle and Microsoft SQL Server.
