

Using NFS with ASM

Both ASM and NFS individually provide shared storage for use with RAC installations, but the two technologies can be combined, allowing ASM to access files shared over NFS. Why would you want to do this? In most cases you probably wouldn't. If you are already using NFS, adding the ASM layer is an unnecessary complication, but there are some situations, such as the test installations described here, where it can be useful.

This article shows the steps necessary to combine NFS and ASM for your shared storage layer. I'm specifically thinking of 11gR2 onward, where ASM is part of the Grid Infrastructure installation, but the method also works for earlier releases.

The instructions in this article were correct for Oracle Linux 6 (OL6). I've been told they don't work on Oracle Linux 7 (OL7), but I've not tried it myself to confirm.


NFS Share Setup

On the central NFS server create a directory to hold the shared files.

# mkdir -p /u01/VM/nfs_shares/asm_test

Add the following line to the "/etc/exports" file. The export options are important, so don't alter them unless you understand the implications of doing so. In particular, "no_root_squash" lets the root user on the client retain root privileges on the share, which the installation requires.

/u01/VM/nfs_shares/asm_test               *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

Make sure the NFS daemon is running.

# chkconfig nfs on
# service nfs restart
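
To confirm the share is being exported with the expected options, you can check from the server itself. This is just a sanity check, not part of the original steps.

# exportfs -v
# showmount -e localhost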

NFS Mount Setup

On each machine that needs access to the NFS file system, perform the following operations.

Create a directory as a mount point for the NFS shares.

# mkdir -p /u01/oradata
# chown oracle:oinstall /u01/oradata

Add the following line to the "/etc/fstab" file, where "nas1" should be the name of the NFS server. The mount options are important, so only alter them if you know what you are doing.

nas1:/u01/VM/nfs_shares/asm_test   /u01/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
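
If you want to test the mount before relying on the "/etc/fstab" entry, the same options can be passed directly on the command line. This one-off command is equivalent to the entry above. Note that "actimeo=0" disables attribute caching, which is important so all nodes see consistent file metadata.

# mount -t nfs -o rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 \
    nas1:/u01/VM/nfs_shares/asm_test /u01/oradata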

Mount the file system and set the ownership.

# mount /u01/oradata
# chown -R oracle:oinstall /u01/oradata
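
Once mounted, it's worth checking the options actually in effect, since the kernel can silently adjust some of them. The "nfsstat -m" command lists each NFS mount point along with its active options.

# mount | grep /u01/oradata
# nfsstat -m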

Create Shared Files

We can use the "dd" command to create some empty files to present to ASM as devices. Since these are shared files, this only needs to be done once, regardless of whether this is a single instance or a RAC installation. Here I create five separate 10G files.

# su - oracle
$ dd if=/dev/zero of=/u01/oradata/asm_dsk1 bs=1k count=10000000
$ dd if=/dev/zero of=/u01/oradata/asm_dsk2 bs=1k count=10000000
$ dd if=/dev/zero of=/u01/oradata/asm_dsk3 bs=1k count=10000000
$ dd if=/dev/zero of=/u01/oradata/asm_dsk4 bs=1k count=10000000
$ dd if=/dev/zero of=/u01/oradata/asm_dsk5 bs=1k count=10000000
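
As a side note, writing ten million 1K blocks is quite slow. A larger block size produces a similar file much faster. This is just an equivalent alternative, not part of the original steps, and at 10240 1M blocks the files come out at a full 10G, marginally bigger than those above.

$ dd if=/dev/zero of=/u01/oradata/asm_dsk1 bs=1M count=10240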

These files can now be presented directly to ASM, similar to raw devices. During the grid installation, you are presented with the "Create ASM Disk Group" screen. Click the "Change Discovery Path" button.

[Screenshot: Create ASM Disk Group]

Enter a wildcard, like "/u01/oradata/asm*", representing the base path for the files we created previously, then click the "OK" button.

[Screenshot: Change Disk Discovery Path]

When you return to the previous screen, the files we created will be listed as candidate disks. You can use them in the same way you would any other disks.

[Screenshot: Create ASM Disk Group]
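
If you are adding these files to an existing ASM instance, rather than doing this during installation, the discovery path can be set directly on the instance instead. This is a minimal sketch, assuming a running ASM instance and the grid environment set, and is not part of the installer flow described above.

$ sqlplus / as sysasm

SQL> ALTER SYSTEM SET asm_diskstring = '/u01/oradata/asm*';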

Using ASMLib

Using ASMLib with NFS is not supported, so only do this if you want to play around with ASMLib. Definitely don't use it for anything important.

In order to use ASMLib we must use "losetup" to make the files look like block devices. Add the following lines to the "/etc/rc.local" file on all nodes so they are run on reboot. You will also need to run the five "losetup" lines manually from the command line as the root user before you can proceed.

/sbin/losetup /dev/loop1 /u01/oradata/asm_dsk1
/sbin/losetup /dev/loop2 /u01/oradata/asm_dsk2
/sbin/losetup /dev/loop3 /u01/oradata/asm_dsk3
/sbin/losetup /dev/loop4 /u01/oradata/asm_dsk4
/sbin/losetup /dev/loop5 /u01/oradata/asm_dsk5
/usr/sbin/oracleasm scandisks

On reboot, ASMLib automatically scans for disks, but this happens before "rc.local" is run, so we need to include the scan here to make sure the disks are visible when ASM starts up.
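
You can check the loop devices are in place with the following command, which lists all configured loop devices and their backing files. This check isn't part of the original steps.

# losetup -a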

Before installing ASMLib you must first determine your current kernel.

# uname -rm
2.6.18-164.el5 x86_64
#

Download the appropriate ASMLib RPMs from OTN. In this case that means the "oracleasm-support" and "oracleasmlib" packages, plus the "oracleasm" kernel driver package matching the kernel version shown above.

From Oracle Linux 6 onward, the oracleasm kernel driver is built into the UEK kernel, so you don't need to install that package separately.

Install the packages using the following command.

# rpm -Uvh oracleasm*.rpm
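
It's worth confirming the packages installed cleanly before moving on.

# rpm -qa | grep oracleasm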

Configure ASMLib using the following command.

# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: 
Writing Oracle ASM library driver configuration: done
#
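
The answers are written to the ASMLib configuration file, typically "/etc/sysconfig/oracleasm", which you can inspect to confirm the settings took effect.

# cat /etc/sysconfig/oracleasm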

Load the kernel module using the following command.

# /usr/sbin/oracleasm init
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
# 

If you have any problems, run the following command to make sure you have the correct version of the driver.

# /usr/sbin/oracleasm update-driver

Mark the five shared disks as follows. This only needs to be done once, from one node.

# /usr/sbin/oracleasm createdisk DISK1 /dev/loop1
Writing disk header: done
Instantiating disk: done
# /usr/sbin/oracleasm createdisk DISK2 /dev/loop2
Writing disk header: done
Instantiating disk: done
# /usr/sbin/oracleasm createdisk DISK3 /dev/loop3
Writing disk header: done
Instantiating disk: done
# /usr/sbin/oracleasm createdisk DISK4 /dev/loop4
Writing disk header: done
Instantiating disk: done
# /usr/sbin/oracleasm createdisk DISK5 /dev/loop5
Writing disk header: done
Instantiating disk: done
# 
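
If you want to double-check an individual disk, the "querydisk" command reports whether ASMLib recognizes a given label or device. This isn't part of the original steps, just a quick verification.

# /usr/sbin/oracleasm querydisk DISK1
# /usr/sbin/oracleasm querydisk /dev/loop1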

Strictly speaking this is unnecessary, but we can run the "scandisks" command to refresh the ASM disk configuration.

# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
#

We can see the disks are now visible to ASM using the "listdisks" command. It's worth doing this from every node to confirm they are visible.

# /usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
DISK5
#

The disks are now configured and ready for the grid infrastructure installation. During the installation, instead of having to alter the disk discovery path, we are presented with the disks we just marked as ASM disks.

[Screenshot: Create ASM Disk Group (ASMLib)]


Hope this helps. Regards Tim...
