How to mount several identical GFS volumes (same UUID) in Red Hat in a non-clustered configuration

All testing in this article was performed using the following technologies:
  • Red Hat Enterprise Linux 5 Update 4
  • EqualLogic SAN 6 Series -- iSCSI environment
  • EMC CX4-240 -- Fibre Channel


Consider the following scenarios:


  • Can you mount GFS volumes in a non-clustered configuration? ---> Yes, you can.
  • With all newer SAN (storage area network) arrays, snapshot and cloned LUNs are standard functionality. Many customers would like to mount these snapshot LUNs on the same server as the original LUN for reasons such as recovery, read-only access, or performance. However, when they try to mount a snapshot LUN whose parent LUN is already mounted on the same system, they get the following error:

    Errors:

    The GFS2 file system on the snapshot LUN cannot be mounted because the file system name already exists.
    GFSLUNS:Test1 already exists on the source LUN (/dev/mapper/Test1).

    [root@hiflex-nd ~]# mount -t gfs2 /dev/mapper/GFSLUNS-Test1 /mnt/Testsnap1
    /sbin/mount.gfs2: error mounting /dev/mapper/GFSLUNS-Test1 on /mnt/Testsnap1: File exists

    GFS2: fsid=: error -17 adding sysfs files
    kobject_add failed for hiflex:daten with -EEXIST, don't try to register things with the same name in the same directory.
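Before trying either workaround below, it can help to confirm that both devices really do carry the same lock table name. A minimal check, reusing the device names from the error output above (gfs2_tool prints the current value of a superblock field when no new value is given):

# gfs2_tool sb /dev/mapper/Test1 table
# gfs2_tool sb /dev/mapper/GFSLUNS-Test1 table

If both commands report the same value, the -EEXIST mount failure above is expected.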




Solution and testing performed:
There are two ways around this (a quick reference follows below):
1) Use "lock_nolock" without any lock table name when creating the GFS volume.
2) Use a different lock table name for the cloned/snapped LUN (the name can be changed with gfs2_tool sb after the clone is presented).
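At a glance, and reusing the device and table names from the detailed examples below purely as illustrations, the two approaches look like this:

# Option 1: create the filesystem with lock_nolock and no lock table name
mkfs.gfs2 -p lock_nolock -j 1 /dev/VolGroup01/LogVolGFS

# Option 2: keep the lock table, but give the cloned copy a different name
gfs2_tool sb /dev/group1_clone/production table gfsvolume_clone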


Below is a detailed explanation of each procedure.


Example 1) "lock_nolock" without any lock table name






- Allocated a 1 GB LUN to an unclustered RHEL 5 server
- Created a GFS volume as follows:

# mkfs.gfs2 -p lock_nolock -j 1 /dev/VolGroup01/LogVolGFS

- Mounted the volume and created some files
- Then created a clone of the LUN on the EqualLogic SAN
- Used the procedures in the following link to mask the existing LVM volumes, present the cloned LUN, and change the UUIDs (a rough sketch of this step is shown after this list):


- Was then able to see both logical volumes on the same server (while the original was still mounted)
- Then successfully mounted the cloned logical volume without any error messages
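The link referenced above has not been reproduced here. As a rough sketch of that step, assuming the original LUN is /dev/sdb, the clone is presented as /dev/sdc, and the volume group is named VolGroup01 (all illustrative names), the masking and UUID change can be done with standard LVM commands:

# 1) In /etc/lvm/lvm.conf, temporarily filter OUT the original device so that
#    only the clone is visible to LVM, for example:
#      filter = [ "r|/dev/sdb|", "a|.*|" ]
# 2) With only the clone visible, give it new identifiers:
vgscan
vgrename VolGroup01 VolGroup01_clone      # rename the VG seen on the clone
pvchange --uuid /dev/sdc                  # new PV UUID for the cloned PV
vgchange --uuid VolGroup01_clone          # new VG UUID for the cloned VG
# 3) Restore the original filter, rescan, and activate the cloned VG:
vgscan
vgchange -ay VolGroup01_clone

On later lvm2 releases the vgimportclone script wraps these steps into a single command, where available.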

Note:
I did not receive the naming conflict because the GFS volume was created using "lock_nolock" with no lock table name.

Note that for my logical volumes, the GFS superblocks do not specify a lock table:

[root@node4 mnt]# gfs2_tool sb /dev/group1/production all
  mh_magic = 0x01161970
  mh_type = 1
  mh_format = 100
  sb_fs_format = 1801
  sb_multihost_format = 1900
  sb_bsize = 4096
  sb_bsize_shift = 12
  no_formal_ino = 2
  no_addr = 23
  no_formal_ino = 1
  no_addr = 22
  sb_lockproto = lock_nolock
  sb_locktable =

[root@node4 mnt]# gfs2_tool sb /dev/group1_clone/production all
  mh_magic = 0x01161970
  mh_type = 1
  mh_format = 100
  sb_fs_format = 1801
  sb_multihost_format = 1900
  sb_bsize = 4096
  sb_bsize_shift = 12
  no_formal_ino = 2
  no_addr = 23
  no_formal_ino = 1
  no_addr = 22
  sb_lockproto = lock_nolock
  sb_locktable =

Example 2) Using a different lock table name for the cloned/snapped volume.



To simulate the above errors, I performed the following steps, adding lock table names to both GFS volumes:

[root@node4 mnt]# gfs2_tool sb /dev/group1/production table gfsvolume
You shouldn't change any of these values if the filesystem is mounted.

Are you sure? [y/n] y

current lock table name = ""
new lock table name = "gfsvolume"
Done

[root@node4 mnt]# gfs2_tool sb /dev/group1/production all
  mh_magic = 0x01161970
  mh_type = 1
  mh_format = 100
  sb_fs_format = 1801
  sb_multihost_format = 1900
  sb_bsize = 4096
  sb_bsize_shift = 12
  no_formal_ino = 2
  no_addr = 23
  no_formal_ino = 1
  no_addr = 22
  sb_lockproto = lock_nolock
  sb_locktable = gfsvolume

[root@node4 mnt]# gfs2_tool sb /dev/group1_clone/production table gfsvolume
You shouldn't change any of these values if the filesystem is mounted.

Are you sure? [y/n] y

current lock table name = ""
new lock table name = "gfsvolume"
Done

[root@node4 mnt]# gfs2_tool sb /dev/group1_clone/production all
  mh_magic = 0x01161970
  mh_type = 1
  mh_format = 100
  sb_fs_format = 1801
  sb_multihost_format = 1900
  sb_bsize = 4096
  sb_bsize_shift = 12
  no_formal_ino = 2
  no_addr = 23
  no_formal_ino = 1
  no_addr = 22
  sb_lockproto = lock_nolock
  sb_locktable = gfsvolume


Both volumes now had the same lock table; trying to mount both of them resulted in the following:

[root@node4 mnt]# mount /dev/group1/production gfs  <SUCCESS>

[root@node4 mnt]# mount /dev/group1_clone/production gfsclone/
/sbin/mount.gfs2: error 17 mounting /dev/mapper/group1_clone-production on /mnt/gfsclone  <FAILURE>


To address the issue, I then changed the lock table name on the cloned GFS volume as follows:

[root@node4 mnt]# gfs2_tool sb /dev/group1_clone/production table gfsvolume_clone
You shouldn't change any of these values if the filesystem is mounted.

Are you sure? [y/n] y

current lock table name = "gfsvolume"
new lock table name = "gfsvolume_clone"
Done

[root@node4 mnt]# gfs2_tool sb /dev/group1_clone/production all
  mh_magic = 0x01161970
  mh_type = 1
  mh_format = 100
  sb_fs_format = 1801
  sb_multihost_format = 1900
  sb_bsize = 4096
  sb_bsize_shift = 12
  no_formal_ino = 2
  no_addr = 23
  no_formal_ino = 1
  no_addr = 22
  sb_lockproto = lock_nolock
  sb_locktable = gfsvolume_clone

[root@node4 mnt]# mount /dev/group1_clone/production gfsclone/

[root@node4 mnt]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/xvda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/mapper/group1-production on /mnt/gfs type gfs2 (rw,localflocks,localcaching)
/dev/mapper/group1_clone-production on /mnt/gfsclone type gfs2 (rw,localflocks,localcaching)
[root@node4 mnt]#


After changing the lock table name, I was able to mount the cloned GFS volume successfully.

This process is summarized in the following KB:





Hope this helps.
Huzeifa Bhai
