File Server, part 4: Setting up ZFS filesystems, SMB shares, NFS exports, and iSCSI targets

The next part of my file server adventure is to build a fully functioning test environment before buying hardware, to make sure I can accomplish everything I’d like. In this entry I’ll focus on creating the pool and filesystems, then setting up the SMB shares, NFS exports, and iSCSI targets.

Creating the ZFS Storage Pool

I’ve created and attached four new virtual disks to my OpenSolaris VM, all of which are 931 GB in size. I used 931 rather than 1024 because disk manufacturers measure capacity in decimal units: a drive sold as 1 TB (10^12 bytes) usually ends up with about 931 GB of usable space once the OS reports it in binary (1024-based) units.
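
Just to show where that 931 comes from, here’s a quick bit of shell arithmetic; this is purely illustrative (it works in bash or ksh93) and wasn’t part of the setup:

root@sol:~# echo $((10**12 / 2**30))
931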

Now I need to know each disk’s device name so I can tell ZFS which disks to use. To find out, I’ll use the iostat -En command.

root@sol:~# iostat -En
c3t0d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 
Vendor: NECVMWar Product: VMware IDE CDR10 Revision: 1.00 Serial No:  
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 
Illegal Request: 0 Predictive Failure Analysis: 0 
c4t0d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 
Vendor: VMware,  Product: VMware Virtual S Revision: 1.0  Serial No:  
Size: 8.59GB <8589934592 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 
Illegal Request: 0 Predictive Failure Analysis: 0 
c4t1d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 
Vendor: VMware,  Product: VMware Virtual S Revision: 1.0  Serial No:  
Size: 999.65GB <999653638144 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 
Illegal Request: 0 Predictive Failure Analysis: 0 
c4t2d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 
Vendor: VMware,  Product: VMware Virtual S Revision: 1.0  Serial No:  
Size: 999.65GB <999653638144 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 
Illegal Request: 0 Predictive Failure Analysis: 0 
c4t3d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 
Vendor: VMware,  Product: VMware Virtual S Revision: 1.0  Serial No:  
Size: 999.65GB <999653638144 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 
Illegal Request: 0 Predictive Failure Analysis: 0 
c4t4d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 
Vendor: VMware,  Product: VMware Virtual S Revision: 1.0  Serial No:  
Size: 999.65GB <999653638144 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 
Illegal Request: 0 Predictive Failure Analysis: 0 

Now that I have the appropriate device names for my four drives, it’s time to create the RAID-Z pool:

root@sol:~# zpool create Storage00 raidz c4t1d0 c4t2d0 c4t3d0 c4t4d0

It takes only a second or so to create the pool, which is pretty awesome. A quick zpool list shows the pool’s total (raw) size and overall status; zpool status gives detailed information on the pool and each of its disks; and zfs list (or df -h) shows how much space I can actually use.

root@sol:~# zpool list
NAME        SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
Storage00  3.62T   611K  3.62T     0%  ONLINE  -
rpool      7.44G  2.69G  4.75G    36%  ONLINE  -

root@sol:~# zpool status Storage00
  pool: Storage00
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        Storage00   ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0

errors: No known data errors

root@sol:~# zfs list Storage00
NAME        USED  AVAIL  REFER  MOUNTPOINT
Storage00  92.0K  2.67T  26.9K  /Storage00

Our usable space ends up being 2.67 TB. That makes sense: in a four-disk raidz1 one disk’s worth of space goes to parity, so roughly 3 × 931 GB (about 2.7 TB) remains, minus a bit of ZFS overhead. This is still sufficient for my needs; my existing data will fill just over 37% of the pool on the real server, which is quite good.

Creating and sharing filesystems

Everything’s looking peachy, so now I’ve got to create all the filesystems I outlined in my prior entry, then share them through NFS and SMB. Beyond installing and enabling OpenSSH, no further steps are necessary to enable SFTP access to these filesystems.
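
For reference, sshd on OpenSolaris is managed through SMF, so assuming the SSH server packages are already installed, enabling it is a one-liner; svcs should then report the service as online:

root@sol:~# svcadm enable network/ssh
root@sol:~# svcs network/ssh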

The zfs command makes it easy to share a filesystem as you create it. By default it uses the full name of the filesystem (PoolName_FileSystemName) as the SMB share name, so I’m going to specify shorter share names; I have no need to include the pool name.

I’ll be specifying casesensitivity=mixed: SMB is case insensitive, but (as the zfs man page states) this setting lets the filesystem handle both case sensitive and case insensitive requests. I’ll also specify nbmand=on to turn on mandatory cross-protocol locking (see this document from Sun), since I’ll be accessing a lot of the same data through both NFS and SMB. You’ll notice I won’t be sharing my “Virtualization” filesystem; a ZFS filesystem itself can’t be used as an iSCSI target, so I’ll create some zvols under the Virtualization FS and use those as iSCSI targets.

So, let’s get moving:

root@sol:~# zfs create -o casesensitivity=mixed -o nbmand=on \
            -o sharesmb=name=Documents -o sharenfs=on Storage00/Documents
root@sol:~# zfs create -o casesensitivity=mixed -o nbmand=on \
            -o sharesmb=name=Development -o sharenfs=on Storage00/Development
root@sol:~# zfs create -o casesensitivity=mixed -o nbmand=on \
            -o sharesmb=name=WWW -o sharenfs=on Storage00/WWW
root@sol:~# zfs create -o casesensitivity=mixed -o nbmand=on \
            -o sharesmb=name=Software -o sharenfs=on Storage00/Software
root@sol:~# zfs create -o casesensitivity=mixed -o nbmand=on \
            -o sharesmb=name=Media -o sharenfs=on Storage00/Media
root@sol:~# zfs create -o casesensitivity=mixed -o nbmand=on \
            -o sharesmb=name=Backups -o sharenfs=on Storage00/Backups
root@sol:~# zfs create Storage00/Virtualization

A quick zfs list confirms everything was set up as intended:

root@sol:~$ zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
Storage00                  313K  2.67T  34.4K  /Storage00
Storage00/Backups         26.9K  2.67T  26.9K  /Storage00/Backups
Storage00/Development     26.9K  2.67T  26.9K  /Storage00/Development
Storage00/Documents       26.9K  2.67T  26.9K  /Storage00/Documents
Storage00/Media           26.9K  2.67T  26.9K  /Storage00/Media
Storage00/Software        26.9K  2.67T  26.9K  /Storage00/Software
Storage00/Virtualization  26.9K  2.67T  26.9K  /Storage00/Virtualization
Storage00/WWW             26.9K  2.67T  26.9K  /Storage00/WWW
rpool                     2.69G  4.64G    72K  /rpool
rpool/ROOT                2.43G  4.64G    18K  legacy
rpool/ROOT/opensolaris    2.43G  4.64G  2.30G  /
rpool/dump                 256M  4.64G   256M  -
rpool/export                85K  4.64G    19K  /export
rpool/export/home           66K  4.64G    19K  /export/home
rpool/export/home/brian     47K  4.64G    47K  /export/home/brian 

Excellent. Let’s also make sure everything was shared as intended:

root@sol:~# sharemgr show -vp
default nfs=()
zfs
    zfs/Storage00/Backups nfs=() smb=()
           Backups=/Storage00/Backups
    zfs/Storage00/Development nfs=() smb=()
           Development=/Storage00/Development
    zfs/Storage00/Documents nfs=() smb=()
           Documents=/Storage00/Documents
    zfs/Storage00/Media nfs=() smb=()
           Media=/Storage00/Media
    zfs/Storage00/Software nfs=() smb=()
           Software=/Storage00/Software
    zfs/Storage00/WWW nfs=() smb=()
          WWW=/Storage00/WWW

Now I’ll need two zvols to share via iSCSI for a separate VMware ESXi server. Each will be 100 GB to start; zvols draw their space from the pool and can be grown later if needed (see the sketch after the create commands below).

root@sol:~# zfs create -V 100gb -o shareiscsi=on Storage00/Virtualization/ESXStore00
root@sol:~# zfs create -V 100gb -o shareiscsi=on Storage00/Virtualization/ESXStore01
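
Growing one later is just a property change, something like this (a sketch only; 200G is an arbitrary new size, and the iSCSI initiator would still have to rescan the LUN and grow whatever filesystem sits on it):

root@sol:~# zfs set volsize=200G Storage00/Virtualization/ESXStore00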

Since we specified the shareiscsi=on option, iscsitadm should show two new targets:

root@sol:~# iscsitadm list target
Target: Storage00/Virtualization/ESXStore00
    iSCSI Name: iqn.1986-03.com.sun:02:38f24940-fc64-4e98-d6ee-aac1ddaa9538
    Connections: 0
Target: Storage00/Virtualization/ESXStore01
    iSCSI Name: iqn.1986-03.com.sun:02:33aa49cb-9b5e-c23b-9e7c-bdffa8fc0cc3
    Connections: 0

If you want to make your iSCSI target names a little more intuitive, you can use the iscsitadm modify command to do so.

I can’t speak for anyone else, but configuring multi-protocol file sharing hasn’t been nearly this quick on other Linux/UNIX platforms I’ve used.

Still Not Finished

I’ve created all the filesystems and shares/exports on the test server, but now I need to make sure all machines on the network can read/write to and from each share. Since this will take a little bit of tweaking, I’ll save that for the next entry.

Comments

anon commented on Tuesday 23 June 2009, 5:39pm CDT:

A time-saving admin feature:
You could have created an extra layer of ZFS filesystem, set the common properties on that filesystem, and then just created filesystems below it; they would inherit their parent's ZFS properties.

e.g.
zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=on -o sharenfs=on Storage00/netshares
zfs create Storage00/netshares/Backups
zfs create Storage00/netshares/Development
...etc..

You would then just need to edit the SMB share names individually if you wanted:
zfs set sharesmb=name=Backups Storage00/netshares/Backups

anon commented on Tuesday 23 June 2009, 5:25pm CDT:

Thanks for some useful tips in this series. Here's one for you:
A useful feature for iSCSI sharing is a sparse ZVOL: the ZVOL can be created with a larger capacity than the space actually available in the pool, so the iSCSI endpoint sees a much larger volume than physically exists. You can add more physical capacity to the parent ZFS pool at a later date (with zpool add), and you won't need to reformat your iSCSI endpoint to use the extra disk space, as it thought it was already there!

Hope that makes sense :o)
See: http://www.cuddletech.com/blog/pivot/entry.php?id=729 for a simple explanation/demo
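
For reference, a sparse zvol is created by adding the -s flag to zfs create; a hypothetical example against the pool above, with a made-up volume name:

root@sol:~# zfs create -s -V 500gb -o shareiscsi=on Storage00/Virtualization/ESXStore02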

Brian commented on Friday 26 June 2009, 10:49pm CDT:

Nice -- those are great points I should have mentioned. I had no idea nested ZFS filesystems inherited share point properties from their parents. Thanks for the tips!