In my Sheepdog cluster, I have three nodes, each with two 1TB SSDs dedicated to a ZFS file system. Each node stripes its two drives together to gain some read performance, and Sheepdog then applies an erasure code redundancy scheme across the three nodes to provide a 2:1 erasure-coded tolerant set (in this case similar to RAID 5), which should yield about 4TB of usable storage space.
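As a rough sketch of the cluster-level step referenced later, the 2:1 scheme is requested when the cluster is formatted; the -c 2:1 value is my reading of the erasure-coding syntax in the dog CLI, so verify it against your Sheepdog version (dog cluster format --help) before running it:

# 3 nodes x 2TB raw = 6TB; 2:1 erasure coding keeps 2/3 of that, so ~4TB usable
# run once, from any node, after all three nodes have joined the cluster
# dog cluster format -c 2:1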
Creating the ZFS file system is a two-step process: create a simple zpool, then apply the file system on top of it. This example uses two partitions on the same drive to prove the concept; in real use, two whole drives should be used.
# step 1
# zpool create -o ashift=12 \
    -O atime=off -O canmount=off -O compression=lz4 \
    sheepdog /dev/sda7 /dev/sda8
# step 2 (this mount point is the location where sheepdog will apply its 'dog cluster format' instruction)
# zfs create -o mountpoint=/var/lib/sheepdog sheepdog/data
Then confirm what was defined:
# zpool status
  pool: sheepdog
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        sheepdog    ONLINE       0     0     0
          sda7      ONLINE       0     0     0
          sda8      ONLINE       0     0     0

errors: No known data errors
# zpool list
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
sheepdog  7.56G   444K  7.56G         -     0%     0%  1.00x  ONLINE  -

# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
sheepdog        408K  7.33G    96K  /sheepdog
sheepdog/data    96K  7.33G    96K  /var/lib/sheepdog
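The creation-time properties can also be spot-checked with the standard zpool get and zfs get commands; the property names below are simply the ones set in step 1:

# zpool get ashift sheepdog
# zfs get compression,atime,canmount sheepdog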
- Oracle's docs on properties
- Basic Notes
- Creation Examples
- Detailed Info
- ZIL Performance: How I Doubled Sync Write Speed: background information on how the ZIL works.
- OpenZFS Developer Summit 2017: talks, papers, and videos from the 2017 Summit.
- How to Setup ZFS Filesystem on Linux with zpool Command Examples
- 2018/06/24: broad overview of how ZFS is structured on disk
- 2018/06/24: ZFSDirectoriesAndChanges