4K Logical Block Size fails on VMware ESX 5.5

With the rise of all-flash storage arrays, DBAs have been exploring the opportunity to use the native 4K sectors/4K logical block mode of flash drives.

Traditional spinning disk almost universally uses a 512-byte block (in reality, most arrays now use a 520-byte block, with 8 bytes reserved for the Data Integrity Field, or DIF). Flash drives, however, use a 4096-byte block, which many all-flash arrays will now expose to the operating system if instructed to do so.

On an EMC XtremIO, the LUN block size may be selected at LUN creation time from the Logical Block Size drop-down.

LUN sector selection

Several popular operating systems including Windows Server 2012, Red Hat Enterprise Linux 6.0 and Solaris 11.1 support this new configuration.

The advantage is that when the host writes 4K blocks instead of 512-byte blocks, the all-flash array does not need a read/modify/write cycle to merge each 512-byte write into a 4K flash block. There are some modest performance benefits to doing this, but don’t expect anything radical in most cases.
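The read/modify/write penalty can be sketched with a toy cost model. This is illustrative only: it ignores alignment, and real arrays coalesce small writes in cache, so treat it as the shape of the problem rather than a measurement.

```python
def io_cost(write_bytes, sector_bytes):
    """Device-side I/O for one host write of write_bytes.

    Returns (sectors_read, sectors_written).  A write that is not a
    whole number of sectors forces a read-modify-write: the device
    reads the affected sector(s), merges in the new bytes, and writes
    the sector(s) back.  Alignment is ignored to keep the model simple.
    """
    if write_bytes % sector_bytes == 0:
        return (0, write_bytes // sector_bytes)
    sectors = -(-write_bytes // sector_bytes)  # ceiling division
    return (sectors, sectors)

# A 512-byte write to a 4K-sector device: read 4K, merge, write 4K back
print(io_cost(512, 4096))   # -> (1, 1)
# A 4K-aligned write to the same device: a single write, no read
print(io_cost(4096, 4096))  # -> (0, 1)
```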

However, be aware that at present the VMware ESX 5.5 hypervisor cannot work with LUNs that use a 4K logical block size, even when they are presented as RDMs. If you try to attach native 4K LUNs to a guest OS as RDMs, the guest OS power-up fails with:

38 (Function not implemented)

4K LUN fail

A VMware Knowledge Base article confirms this is expected behavior.

VMware Knowledge Base 2091600
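On a Linux guest you can see how a device reports its sector sizes through sysfs, which makes it easy to confirm what format a LUN is presenting before troubleshooting. A small sketch, assuming the standard Linux sysfs layout (the classification labels are my own shorthand):

```python
from pathlib import Path

def sector_format(logical, physical):
    """Classify a device by its reported sector sizes."""
    if logical == 4096:
        return "4K native (4Kn)"       # the format ESX 5.5 rejects
    if logical == 512 and physical == 4096:
        return "512-byte emulation (512e)"
    return "512 native (512n)"

def device_sector_format(dev):
    """Read the sector sizes for a device (e.g. dev='sda') from sysfs."""
    q = Path("/sys/block") / dev / "queue"
    logical = int((q / "logical_block_size").read_text())
    physical = int((q / "physical_block_size").read_text())
    return sector_format(logical, physical)

print(sector_format(4096, 4096))  # -> 4K native (4Kn)
print(sector_format(512, 512))    # -> 512 native (512n)
```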

When using VMware, be sure to select Normal (512 LBs) when creating LUNs on your XtremIO array.

Configuring DD Boost on Linux for Oracle RMAN

In this post, we are going to install the DD Boost module for Oracle RMAN.

In the previous post, Configuring DD Boost Replication for Oracle RMAN, we enabled DD Boost on the Data Domain and set up storage units that replicate automatically between sites.

In this post, we are going to install and configure the DD Boost for Oracle RMAN module so that our RMAN backups can leverage the performance benefits of DD Boost.

Continue reading

Configuring DD Boost Replication for Oracle RMAN

In this post, we are going to enable DD Boost on the Data Domain and set up storage units that replicate automatically between sites.

DD Boost is an optional module that works with Data Domain and a number of applications and databases, including Oracle. DD Boost moves some of the sophisticated deduplication process from the Data Domain appliance to the database server, resulting in a dramatic reduction in network bandwidth and backup times.

A greater than fifty percent reduction in backup times for a full level zero RMAN backup is typical when switching to DD Boost, although, as always, your mileage may vary.
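The bandwidth saving comes from moving deduplication to the client: only chunks the target has never seen cross the wire. A toy model of that idea (not the actual DD Boost protocol, whose chunking and fingerprinting are proprietary):

```python
import hashlib

def backup(data, chunk_size, seen):
    """Toy model of source-side deduplication, the idea behind DD Boost.

    The client chunks the backup stream, fingerprints each chunk, and
    ships only chunks the target has not already stored; for duplicates
    it sends just the 32-byte fingerprint.  Returns bytes sent over
    the network.
    """
    sent = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            sent += len(digest)      # duplicate: fingerprint only
        else:
            seen.add(digest)
            sent += len(chunk)       # new data: ship the whole chunk
    return sent

seen = set()
full = b"".join(bytes([i]) * 4096 for i in range(4))      # 16 KB, all new
incr = b"".join(bytes([i]) * 4096 for i in (0, 1, 8, 9))  # half unchanged
print(backup(full, 4096, seen))  # -> 16384 (everything travels)
print(backup(incr, 4096, seen))  # -> 8256 (two chunks + two fingerprints)
```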

Continue reading

Configuring a YUM Repository from a Linux install disc

In this post, we will explore how to set up a YUM repository using the RPMs available on the Linux install media.

There are already a great number of good blogs on setting up YUM repositories so this is nothing especially new. But every time I install Oracle I find I have to track down one or two that cover what I need, so this is my brain dump on how to do it.
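For reference, the core of the procedure fits in a few commands. The mount point and ISO path below are assumptions; the key point is that the CentOS install media already carries its own repodata, so no createrepo step is needed:

```shell
# Mount the install DVD or its ISO (mount point is an assumption)
mkdir -p /mnt/disc
mount -o loop,ro /path/to/CentOS-6.4-x86_64-bin-DVD1.iso /mnt/disc

# Point YUM at the mounted disc
cat > /etc/yum.repos.d/local-media.repo <<'EOF'
[local-media]
name=Local install media
baseurl=file:///mnt/disc
gpgcheck=0
enabled=1
EOF

yum clean all
yum repolist
```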

Continue reading

Using EMC Powerpath with ASMLib

ASMLib is optional software to simplify the use of Automatic Storage Management (ASM) for Oracle databases on Linux.

ASMLib ensures consistent naming of devices across RAC clusters, and also maintains permissions on devices across reboots, a feature that was important until UDEV rules were introduced with the Linux 2.6 kernel.

EMC PowerPath is an advanced host-based multipathing solution that works with EMC arrays to intelligently load-balance I/O across all available paths, and to provide fault tolerance by automatically rerouting traffic around failed paths. EMC PowerPath is significantly more powerful and robust than native Linux MPIO.
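The crux of combining the two is making ASMLib label the PowerPath pseudo-devices rather than the underlying SCSI paths. A minimal sketch, with an illustrative device name (check `powermt display dev=all` for yours):

```shell
# Tell ASMLib to scan PowerPath pseudo-devices first and to skip the
# underlying /dev/sd* paths.  In /etc/sysconfig/oracleasm:
#   ORACLEASM_SCANORDER="emcpower"
#   ORACLEASM_SCANEXCLUDE="sd"

# Label a PowerPath pseudo-device partition for ASM
oracleasm createdisk DATA1 /dev/emcpowera1

# Rescan and confirm the label is visible
oracleasm scandisks
oracleasm listdisks
```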

Continue reading

Using KFOD to verify disks before installing Grid Infrastructure

The KFOD tool is an Oracle supplied command line tool for inspecting available disks.

Since many of the issues associated with failed RAC installs are caused by shared disk problems, using KFOD to verify that ASMLib or UDEV has presented the disks with the correct permissions to all nodes before launching the Grid installer can save a good deal of time and effort.

However, since it is the Grid installer that installs KFOD, this can be tricky.

In this post we show how to leverage KFOD before the Grid Infrastructure is installed:
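One approach is to run kfod straight out of the unzipped installer staging area, before anything is installed. The staging path and discovery string below are assumptions for a typical 12c layout, and the exact flags can vary by release:

```shell
# kfod ships inside the unzipped Grid Infrastructure installer
# (path is an example; adjust for where you unzipped the media)
cd /u01/stage/grid/stage/ext/bin

# List every disk kfod can discover with this discovery string,
# including owner/group/permission status; repeat on each RAC node
./kfod disks=all status=true asm_diskstring='/dev/oracleasm/disks/*'
```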

Continue reading

Oracle 12cR1 12.1.0.1 2-node RAC on CentOS 6.4 on VMware Workstation 9 – Introduction

A couple of weeks ago, Oracle released the long awaited Oracle 12c database, with lots of exciting new features.

A couple of great blog posts have already been written on how to install this, but from what I have seen they rely on Oracle’s OVM technology and/or Oracle Enterprise Linux.

This blog post is a detailed step-by-step of Oracle 12cR1 RAC using VMware Workstation and CentOS 6.4.

Continue reading

Oracle 12cR1 12.1.0.1 2-node RAC on CentOS 6.4 on VMware Workstation 9 – Part XI

Time Required: 20 minutes

Class Materials:

  • Completed Oracle 12c install

The next step is to create a new 12c database!

DBCA is still the tool used to create a new database in 12c, but before we launch it, we will modify the /etc/oratab file.
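As a reminder of the oratab format (the SIDs and paths below are illustrative, not taken from this install), each line names an instance, its Oracle home, and whether dbstart should bring it up at boot:

```shell
# /etc/oratab format:  <sid>:<oracle_home>:<Y|N>
# Example entries only -- substitute your own SIDs and homes
+ASM1:/u01/app/12.1.0/grid:N
orcl:/u01/app/oracle/product/12.1.0/dbhome_1:N
```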

Continue reading

Oracle 12cR1 12.1.0.1 2-node RAC on CentOS 6.4 on VMware Workstation 9 – Part X

Time Required: 60 minutes

Class Materials:

  • Oracle 12cR1 Database software

The next step is to install the Oracle 12c database software.

The database install process is largely unchanged from the 11g installer, so this should be familiar territory to most DBAs.

Continue reading

Oracle 12cR1 12.1.0.1 2-node RAC on CentOS 6.4 on VMware Workstation 9 – Part IX

Time Required: 60 minutes

Class Materials:

  • Oracle 12cR1 Grid Infrastructure software

Now that we have completed all the preparation steps and the grid pre-install steps, we can install the 12c Oracle Grid Infrastructure software.

The Grid Infrastructure will provide the Cluster software that allows the RAC nodes to communicate, as well as the ASM software to manage the shared disks.

Continue reading