Create a storage snapshot of an Oracle 19c RAC database using ASM diskgroups with Dell PowerMax.

In this post I am going to explore creating a storage snapshot of an Oracle 19c RAC database using ASM disks from PowerMax storage.

This post is a companion to the video Dell PowerMax – Create a snapshot of an Oracle RAC database.

This blog post assumes some basic knowledge of PowerMax storage concepts. If you need a basic introduction please check out my PowerMax basics video.

These examples will use the Solutions Enabler command line method of managing the PowerMax, unlike the video which uses the Unisphere graphical interface.

In my blog post Create Oracle ASM diskgroups with Dell PowerMax and PowerPath we followed Oracle on PowerMax Best Practices to create several ASM diskgroups for our Oracle RAC database, based on several PowerMax storage groups.

In this article we are going to use the features of the PowerMax to create a storage snapshot of those ASM diskgroups, by creating a snapshot of the PowerMax storage groups. We are then going to mount the snapshot to a second host and mount the ASM diskgroups.

When we created the ASM diskgroups and corresponding PowerMax storage groups (SGs) we separated out the files into several diskgroups. For each ASM diskgroup there is a corresponding SG, laid out as follows:

Storage Group   Purpose
SWING_DATA      Datafiles, Tempfiles, Controlfiles
SWING_REDO      Online Redo Logs
SWING_FRA       Fast Recovery Area

Since we did not place any files in the SWING_FRA diskgroup we do not need to include it in our snapshot, but you should check to see where your database files are located before you decide which SGs you need to make snapshot copies of. Most production databases will be more complex than the demonstration here, but the PowerMax makes snapshots simple regardless of size and complexity.
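For example, a quick way to confirm which diskgroups actually hold files is to list the file paths from the source database (a simple sketch, run as SYSDBA; the diskgroup is the leading + component of each path):

select name from v$datafile
union all
select name from v$tempfile
union all
select member from v$logfile
union all
select name from v$controlfile;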

We want to make a snapshot of the PowerMax SGs swingdata_sg and swingredo_sg. The following shows the command-line process using Solutions Enabler to create snapshots named gctsnap_swingdata_sg and gctsnap_swingredo_sg.

[root@dsib0251]# symsnapvx -sg swingdata_sg establish -name gctsnap_swingdata_sg

Establish operation execution is in progress for the storage group swingdata_sg. Please wait...
    Polling for Establish.............................................Not Needed.
    Polling for Activate..............................................Started.
    Polling for Activate..............................................Done.
Establish operation successfully executed for the storage group swingdata_sg

[root@dsib0251]# symsnapvx -sg swingredo_sg establish -name gctsnap_swingredo_sg

Establish operation execution is in progress for the storage group swingredo_sg. Please wait...
    Polling for Establish.............................................Not Needed.
    Polling for Activate..............................................Started.
    Polling for Activate..............................................Done.
Establish operation successfully executed for the storage group swingredo_sg

We could simply have made a snapshot of the parent SG swing_sg, which would have included both of the child SGs. Either approach is fine. Snapshotting the SGs individually gives the DBA more flexibility to perform a granular restore, possibly applying redo up to a specific SCN. If you choose to snapshot SGs individually, be sure to snapshot the SGs containing the redo logs last.

DBAs may also note that we did not place the database into backup mode before taking this snapshot. Backup mode is optional when taking Oracle database snapshots on a Dell PowerMax, because the PowerMax preserves write ordering: the order of writes from the database writers and log writers to the physical media is maintained. A database snapshotted outside of backup mode is therefore treated, upon startup, as if it had crashed, and standard crash recovery is applied.

This approach is supported by Oracle as documented in MOS Note 604683.1 – Supported Backup, Restore and Recovery Operations using Third Party Snapshot Technologies.

Of course many DBAs may still prefer to place the database into backup mode first, and then take it out of backup mode afterwards. Either approach is fine.
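For reference, that sequence is the standard one (run from SQL*Plus on the source database, with the symsnapvx establish commands issued between the two statements):

alter database begin backup;

-- take the storage snapshots here (symsnapvx ... establish ...)

alter database end backup;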

Once we have created a snapshot we can check the PowerMax to see what we have:

[root@dsib0252 ~]# symsnapvx -devs 0014B:0014E,00153:00155 -detail list

Symmetrix ID              : 000120200304    (Microcode Version: 6079)

----------------------------------------------------------------------------------------------------------------------------------------
                                                                               Snapshot   Total
Sym                                          Flags                             Dev Size   Deltas     Non-Shared
Dev   Snapshot Name                    Gen  FLRG TSEB Snapshot Timestamp       (Tracks)   (Tracks)   (Tracks)   Expiration Date
----- -------------------------------- ---- --------- ------------------------ ---------- ---------- ---------- ------------------------
0014B gctsnap_swingdata_sg                0 .... .... Thu May 12 11:50:07 2022    8388615        256        128                       NA
      gctsnap_swingdata_sg                1 .... .... Wed May  4 13:49:33 2022    8388615       8576       1152                       NA
0014C gctsnap_swingdata_sg                0 .... .... Thu May 12 11:50:07 2022    8388615        256        128                       NA
      gctsnap_swingdata_sg                1 .... .... Wed May  4 13:49:33 2022    8388615       8704       1152                       NA
0014D gctsnap_swingdata_sg                0 .... .... Thu May 12 11:50:07 2022    8388615        256        128                       NA
      gctsnap_swingdata_sg                1 .... .... Wed May  4 13:49:33 2022    8388615       8576       1152                       NA
0014E gctsnap_swingdata_sg                0 .... .... Thu May 12 11:50:07 2022    8388615        256        256                       NA
      gctsnap_swingdata_sg                1 .... .... Wed May  4 13:49:33 2022    8388615       8576       1152                       NA
00153 gctsnap_swingredo_sg                0 .... .... Thu May 12 14:17:31 2022     409605          6          6                       NA
      gctsnap_swingredo_sg                1 .... .... Wed May  4 13:49:44 2022     409605          6          6                       NA
00154 gctsnap_swingredo_sg                0 .... .... Thu May 12 14:17:31 2022     409605          0          0                       NA
      gctsnap_swingredo_sg                1 .... .... Wed May  4 13:49:44 2022     409605          0          0                       NA
00155 gctsnap_swingredo_sg                0 .... .... Thu May 12 14:17:31 2022     409605          0          0                       NA
      gctsnap_swingredo_sg                1 .... .... Wed May  4 13:49:44 2022     409605          0          0                       NA
                                                                                          ---------- ----------
                                                                                               35468       5260



Flags:

  (F)ailed    : X = General Failure, . = No Failure
              : S = SRP Failure, R = RDP Failure, I = Establish in progress
  (L)ink      : X = Link Exists, . = No Link Exists
  (R)estore   : X = Restore Active, . = No Restore Active
  (G)CM       : X = GCM, . = Non-GCM
  (T)ype      : Z = zDP snapshot, S = Policy snapshot, C = Cloud snapshot
              : P = Persistent Policy snapshot, . = Manual snapshot
  (S)ecured   : X = Secured, . = Not Secured
  (E)xpanded  : X = Source Device Expanded, . = Source Device Not Expanded
  (B)ackground: X = Background define in progress, . = No Background define

In the above example we have queried the PowerMax to list the available snapshots using the device IDs of the volumes backing our ASM disks. Recall these volume IDs are shown using the symsg show command we explored in the previous blog post.
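As a reminder, the device IDs for each SG can be retrieved with a command of the form:

symsg show swingdata_sg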

You may note that in the Gen column we see 0 and 1. This is because we have two snapshots with each of these names – gctsnap_swingdata_sg and gctsnap_swingredo_sg – one taken May 4 and the other May 12. It is important to know that with PowerMax, the name of a snapshot is typically not enough to identify the copy in question; we also need to know the generation.

Multi-generation snapshots may seem peculiar at first to DBAs familiar with simpler approaches, but they allow, for example, a snapshot named month_end to be taken repeatedly, with each run preserved as a separate generation over time.
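For example, a hypothetical monthly job could simply re-run the establish with the same snapshot name; SnapVX makes the newest copy generation 0 and renumbers the older generations upward, exactly as seen in the listing above:

symsnapvx -sg swingdata_sg establish -name month_end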

Another way PowerMax snapshots differ from some simpler approaches is that we cannot directly mount the snapshot generation seen above. Before we can use the copy, we must first prepare link devices and then link the snapshot generation to them; the link devices can then be mounted on target servers.

Link devices are really just empty volumes. We need one link device for each volume in the snapshot, created at the same capacity as the source volumes. We can go back and check the size and number of the volumes we created, or we can simply interrogate the PowerMax:

export SNAP_NAME=gctsnap_swingdata_sg

# enumerate all volumes carrying the named snapshot
for sd in $(symsnapvx list | grep "$SNAP_NAME" | awk '{print $1}')
do
  # symdev show fails for non-device tokens (e.g. snapshot-name
  # continuation lines), so only report when a capacity comes back
  cap=$(symdev show "$sd" 2>/dev/null | grep -iF gigabytes | head -1)
  if [ -n "$cap" ]
  then
    echo "device $sd"
    echo "vol $sd capacity $cap"
  fi
done

device 0014B
vol 0014B capacity  GigaBytes : 1024.0
device 0014C
vol 0014C capacity  GigaBytes : 1024.0
device 0014D
vol 0014D capacity  GigaBytes : 1024.0
device 0014E
vol 0014E capacity  GigaBytes : 1024.0

In the above example we took the snapshot gctsnap_swingdata_sg and used a simple shell script to query the volume count and capacities. We can see that this snapshot comprises four volumes of 1TB each.

We will need four matching volumes to act as link devices before we can mount a copy of this snapshot to a target server. In the following example we create four devices of 1TB each, named SNAP_DATA0 through SNAP_DATA3:

[root@dsib0251 ~]# symdev create -v -tdev -device_name SNAP_DATA -number 0 -starting_dev_num 0 -cap 1024 -captype gb -N 4

STARTING a TDEV Create Device operation on Symm 000120200304.
Wait for device create...............................Started.
Wait for device create...............................Done.
     The TDEV Create Device operation SUCCESSFULLY COMPLETED: 4 devices created.
4 TDEVs create requested in request 1 and devices created are 4[ 00142:00145 ]
STARTING Set/Reset device attribute operation on Symm 000120200304.
     The Device Attribute Set/Reset operations SUCCESSFULLY COMPLETED: Attributes of 1 device(s) modified.
Successful Set/Reset device(s) in req# 1: [00142]
STARTING Set/Reset device attribute operation on Symm 000120200304.
     The Device Attribute Set/Reset operations SUCCESSFULLY COMPLETED: Attributes of 1 device(s) modified.
Successful Set/Reset device(s) in req# 1: [00143]
STARTING Set/Reset device attribute operation on Symm 000120200304.
     The Device Attribute Set/Reset operations SUCCESSFULLY COMPLETED: Attributes of 1 device(s) modified.
Successful Set/Reset device(s) in req# 1: [00144]
STARTING Set/Reset device attribute operation on Symm 000120200304.
     The Device Attribute Set/Reset operations SUCCESSFULLY COMPLETED: Attributes of 1 device(s) modified.
Successful Set/Reset device(s) in req# 1: [00145]

The PowerMax created four new devices identified as 00142 through 00145.

With our four new devices, let’s add them to a new SG, which we will call swingdata_sg_lnk. This will make management simpler.

symsg create swingdata_sg_lnk

symsg -sg swingdata_sg_lnk addall -devs 00142:00145

Now we can link the snapshot to the link devices using the symsnapvx command. In the following example we are linking storage group swingdata_sg to link storage group swingdata_sg_lnk using the snapshot gctsnap_swingdata_sg generation 1 – which, as shown above, was taken May 4.

[root@dsib0251 ~]# symsnapvx -sg swingdata_sg -lnsg swingdata_sg_lnk -snapshot_name gctsnap_swingdata_sg link -generation 1

Link operation execution is in progress for the storage group swingdata_sg. Please wait...

    Polling for Link..................................................Started.
    Polling for Link..................................................Done.

Link operation successfully executed for the storage group swingdata_sg

We will need to repeat these steps for the gctsnap_swingredo_sg snapshot as well.
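A sketch of those repeated steps follows. The 50GB capacity here is inferred from the 409605-track device size shown earlier (verify with symdev show on your own system), and the device IDs 00146:00148 match the link devices that appear in the listing below:

symdev create -v -tdev -device_name SNAP_REDO -number 0 -starting_dev_num 0 -cap 50 -captype gb -N 3

symsg create swingredo_sg_lnk

symsg -sg swingredo_sg_lnk addall -devs 00146:00148

symsnapvx -sg swingredo_sg -lnsg swingredo_sg_lnk -snapshot_name gctsnap_swingredo_sg link -generation 1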

We can check to see what snapshots are linked with the symsnapvx list -linked command. In the following example we can see Sym devices 0014B through 0014E are linked to link devices 00142 through 00145 respectively using snapshot gctsnap_swingdata_sg generation 1. Sym devices 00153 through 00155 are linked to link devices 00146 through 00148 respectively using snapshot gctsnap_swingredo_sg generation 1.

[root@dsib0251 ~]# symsnapvx list -linked

Symmetrix ID              : 000120200304    (Microcode Version: 6079)

--------------------------------------------------------------------------------
Sym                                         Link  Flgs
Dev   Snapshot Name                    Gen  Dev   FCMDS Snapshot Timestamp
----- -------------------------------- ---- ----- ----- ------------------------
0014B gctsnap_swingdata_sg                1 00142 .D.X. Wed May  4 13:49:33 2022
0014C gctsnap_swingdata_sg                1 00143 .D.X. Wed May  4 13:49:33 2022
0014D gctsnap_swingdata_sg                1 00144 .D.X. Wed May  4 13:49:33 2022
0014E gctsnap_swingdata_sg                1 00145 .D.X. Wed May  4 13:49:33 2022
00153 gctsnap_swingredo_sg                1 00146 .D.X. Wed May  4 13:49:44 2022
00154 gctsnap_swingredo_sg                1 00147 .D.X. Wed May  4 13:49:44 2022
00155 gctsnap_swingredo_sg                1 00148 .D.X. Wed May  4 13:49:44 2022

Next we need to expose our link storage groups swingdata_sg_lnk and swingredo_sg_lnk to our target hosts. First let’s check to see what already exists:

[root@dsib0251 ~]# symaccess list view

Symmetrix ID          : 000120200304

Masking View Name   Initiator Group     Port Group          Storage Group
------------------- ------------------- ------------------- -------------------
OraRac_FRA          OraRACMetro         304_FC              OraRac_FRA
OraRacDB            OraRACMetro         304_FC              OraRac_DB
OraRacGI            OraRACMetro         304_FC              OraRac_GI
swing_mv            OraRACMetro         304_FC              swing_sg
swingfra_mv         OraRACMetro         304_FC              swingfra_sg

We can see that we already have a series of masking views established, including views exposing the swing_sg and swingfra_sg storage groups through port group 304_FC using initiator group OraRACMetro. In our example we plan to expose our snapshot back to these same hosts.

To make our lives simpler, let’s create a parent storage group snap_sg_lnk to cover both swingdata_sg_lnk and swingredo_sg_lnk.

[root@dsib0251 ~]# symsg create snap_sg_lnk
[root@dsib0251 ~]# symsg -sg snap_sg_lnk add sg swingdata_sg_lnk,swingredo_sg_lnk

Now we can create a masking view to cover the new parent storage group:

symaccess create view -name snap_mv -pg 304_FC -ig OraRACMetro -sg snap_sg_lnk
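We can confirm the new view and its contents with:

symaccess show view snap_mv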

Now that the snapshot is exposed to the target hosts, we can log into them and initiate a SCSI bus rescan. The sg3_utils script /usr/bin/rescan-scsi-bus.sh is a popular way to achieve this, and it will need to be executed on every node to which you are exposing the snapshot. If you are using PowerPath for path management, you may also have to execute the powermt check command to instruct PowerPath to look for new paths and devices to manage.
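A sketch of the rescan sequence to run on each target node (the -a flag asks the script to scan all adapters; adjust for your environment):

/usr/bin/rescan-scsi-bus.sh -a
powermt check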

We should now be able to use ASM to scan for new devices on our target host:

[oracle@dsib0251 ~]$ asmcmd afd_scan

And now list the disks ASM has discovered:

[oracle@dsib0253 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================

SWINGDATA00                 ENABLED   /dev/emcpowerr
SWINGDATA01                 ENABLED   /dev/emcpowerq
SWINGDATA02                 ENABLED   /dev/emcpowerw
SWINGDATA03                 ENABLED   /dev/emcpowerx

SWINGREDO00                 ENABLED   /dev/emcpowert
SWINGREDO01                 ENABLED   /dev/emcpowerp
SWINGREDO02                 ENABLED   /dev/emcpoweru

The ASM labels are still the same as they were on the source system, so we can unlabel the disks if required:

asmcmd afd_unlabel /dev/emcpowerr -f
asmcmd afd_unlabel /dev/emcpowerq -f
asmcmd afd_unlabel /dev/emcpowerw -f
asmcmd afd_unlabel /dev/emcpowerx -f

asmcmd afd_unlabel /dev/emcpowert -f
asmcmd afd_unlabel /dev/emcpowerp -f
asmcmd afd_unlabel /dev/emcpoweru -f

And then relabel them:

asmcmd afd_label SNAPDATA0 /dev/emcpowerr --rename
asmcmd afd_label SNAPDATA1 /dev/emcpowerq --rename
asmcmd afd_label SNAPDATA2 /dev/emcpowerw --rename
asmcmd afd_label SNAPDATA3 /dev/emcpowerx --rename

asmcmd afd_label SNAPREDO0 /dev/emcpowert --rename
asmcmd afd_label SNAPREDO1 /dev/emcpowerp --rename
asmcmd afd_label SNAPREDO2 /dev/emcpoweru --rename

These unlabel/relabel commands only have to be executed on one node of a RAC cluster. Once the ASM disks are relabeled, you can use asmcmd afd_scan on the other nodes to see the newly relabeled disks.
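For example, on the second node of our cluster:

[oracle@dsib0252 ~]$ asmcmd afd_scan
[oracle@dsib0252 ~]$ asmcmd afd_lsdsk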

We can also rename the ASM diskgroups if we so choose:

[oracle@dsib0251 ~]$ renamedg dgname=SWINGDATA newdgname=SNAPDATA verbose=true check=false asm_diskstring='AFD:SNAPDATA*'

[oracle@dsib0251 ~]$ renamedg dgname=SWINGREDO newdgname=SNAPREDO verbose=true check=false asm_diskstring='AFD:SNAPREDO*'

For completeness, we can also rename the individual disks of the ASM diskgroups. The following are ASM SQL commands:

alter diskgroup SNAPDATA mount restricted;
alter diskgroup SNAPDATA rename disks all;
alter diskgroup SNAPDATA dismount;

alter diskgroup SNAPREDO mount restricted;
alter diskgroup SNAPREDO rename disks all;
alter diskgroup SNAPREDO dismount;
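Once the diskgroups are remounted in the next step, the renamed disks can be verified from the ASM instance with a quick query:

select name, path from v$asm_disk order by name;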

And now mount the newly renamed diskgroups on all nodes of our RAC cluster:

[oracle@dsib0251 ~]$ srvctl enable diskgroup -g SNAPDATA -n dsib0251,dsib0252

[oracle@dsib0251 ~]$ srvctl start diskgroup -g SNAPDATA -n dsib0251,dsib0252
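We can confirm the diskgroups are mounted on both nodes with a quick status check; the same enable/start/status commands apply to the SNAPREDO diskgroup as well:

[oracle@dsib0251 ~]$ srvctl status diskgroup -g SNAPDATA -n dsib0251,dsib0252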

Now that we have the ASM diskgroups mounted, we can move on to mounting and opening the database clone. The post Mounting and Opening an Oracle database created with a thin clone snapshot on PowerStore contains a detailed examination of how to mount and open a database that has been cloned using storage snapshot technology.

Although there are a number of steps the first time a snapshot is mounted in this fashion, subsequent refreshes will be much simpler. The link devices, storage groups and masking views can all be reused for any refresh, although the ASM disk and diskgroup renames will be lost, and the names will revert to their source-system settings.

Acknowledgements.

Sincere thanks to Yaron Dar at Dell for his generous help and allowing me time on his PowerMax lab system.
