<?xml version="1.0" encoding="utf-8" ?>

<rss version="2.0" 
   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
   xmlns:admin="http://webns.net/mvcb/"
   xmlns:dc="http://purl.org/dc/elements/1.1/"
   xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
   xmlns:wfw="http://wellformedweb.org/CommentAPI/"
   xmlns:content="http://purl.org/rss/1.0/modules/content/"
   >
<channel>
    
    <title>Raymond P. Burkholder - Things I Do - File Systems</title>
    <link>https://blog.raymond.burkholder.net/</link>
    <description>In And Around Technology and The Arts</description>
    <dc:language>en</dc:language>
    <generator>Serendipity 1.7.2 - http://www.s9y.org/</generator>
    <pubDate>Mon, 10 Jul 2023 00:18:39 GMT</pubDate>

    <image>
        <url>https://blog.raymond.burkholder.net/templates/bulletproof/img/s9y_banner_small.png</url>
        <title>RSS: Raymond P. Burkholder - Things I Do - File Systems - In And Around Technology and The Arts</title>
        <link>https://blog.raymond.burkholder.net/</link>
        <width>100</width>
        <height>21</height>
    </image>

<item>
    <title>Debian Apt Btrfs Auto-Snapshot Retrofit</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/1238-Debian-Apt-Btrfs-Auto-Snapshot-Retrofit.html</link>
            <category>BTRFS</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/1238-Debian-Apt-Btrfs-Auto-Snapshot-Retrofit.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=1238</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1238</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;&lt;a href=&quot;https://github.com/david-cortes/snapper-in-debian-guide&quot; target=_blank&gt;Debian with Automated Snapper Rollbacks&lt;/a&gt; is a short tutorial about setting up a Debian Linux system with automated BTRFS snapshots of the system and easy rollback to previous auto-generated snapshots.  Once it&#039;s set up, pre/post snapshots are taken automatically when you run `apt`, and you can boot into them from GRUB.
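
&lt;p&gt;For a rough idea of the mechanism (my own sketch, not taken from the guide; the hook file name and the exact snapper options are assumptions), apt&#039;s dpkg hooks can call snapper around each run:

```
# /etc/apt/apt.conf.d/80snapper -- hypothetical sketch of an apt hook
# taking a snapper snapshot before and after each dpkg invocation
DPkg::Pre-Invoke  { "snapper create -t pre -p -d 'apt run' > /run/snapper-apt-pre || true"; };
DPkg::Post-Invoke { "snapper create -t post -d 'apt run' --pre-number \"$(cat /run/snapper-apt-pre)\" || true"; };
```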
    </content:encoded>

    <pubDate>Mon, 10 Jul 2023 00:18:39 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/1238-guid.html</guid>
    
</item>
<item>
    <title>ZRAM</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/1061-ZRAM.html</link>
            <category>File Systems</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/1061-ZRAM.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=1061</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1061</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Zram&quot; target=_blank&gt;zram&lt;/a&gt;, formerly called compcache, is a Linux kernel module for creating a compressed block device in RAM, i.e. a RAM disk, but with on-the-fly “disk” compression.  Debian has an &lt;a href=&quot;https://wiki.debian.org/ZRam&quot; target=_blank&gt;init system&lt;/a&gt; file for it.

&lt;blockquote&gt;
I&#039;m one of those who use zsmalloc as a module - mainly because I use zram
as a compressing general purpose block device, not as a swap device.
I create zram0, mkfs, mount, checkout and compile code, once done -
umount, rmmod. This reduces the number of writes to SSD. Some people use
tmpfs, but zram device(-s) can be much larger in size. That&#039;s a niche use
case and I&#039;m not against the patch.
&lt;/blockquote&gt;

&lt;p&gt;See also: &lt;a href=&quot;https://www.cnx-software.com/2018/05/14/running-out-of-ram-in-ubuntu-enable-zram/&quot; target=_blank&gt;Running out of RAM in Ubuntu? Enable ZRAM&lt;/a&gt;.
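
&lt;p&gt;As a transcript-style sketch of the block-device use quoted above (needs root; size, compression algorithm, filesystem, and mount point are illustrative):

```
# modprobe zram
# zramctl /dev/zram0 --algorithm lzo --size 4G
# mkfs.ext4 /dev/zram0
# mount /dev/zram0 /mnt/scratch
  ... checkout and compile in RAM ...
# umount /mnt/scratch
# rmmod zram
```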
    </content:encoded>

    <pubDate>Fri, 10 Apr 2020 15:29:13 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/1061-guid.html</guid>
    
</item>
<item>
    <title>Dell R610 with H310 Drive Controller</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/1059-Dell-R610-with-H310-Drive-Controller.html</link>
            <category>Ceph</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/1059-Dell-R610-with-H310-Drive-Controller.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=1059</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1059</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;When using some relatively inexpensive parts to build a three-server test Ceph solution, it seems that drive controller and SSD drive combinations are finicky.  Specifically, a Dell H310 controller may have issues with some or all Samsung SSD drives.

&lt;p&gt;At &lt;a href=&quot;https://www.dell.com/community/PowerEdge-HDD-SCSI-RAID/Slow-writes-PERC-H700-and-Samsung-850-PRO-SSD/td-p/5007559&quot; target=_blank&gt;Slow writes PERC H700 and Samsung 850 PRO SSD&lt;/a&gt;, a Dell response indicates:

&lt;blockquote&gt;
The  Samsung 850 Pro SSDs are not validated or certified to work with Dell controllers and as such there is a communication mismatch between the drives and the controller at the firmware level. As a result, you are bound to realize unexpected poor Read and Write performance regardless of controller cache settings.
&lt;/blockquote&gt;

&lt;p&gt;The thread indicates that the Samsung 840 drive will work with the controllers.  With Broadcom taking over LSI, old links have been removed, so documents are hard to find now.  In trying to follow a document trail:

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://downloads.dell.com/manuals/common/rc_h310_h710_h710p_h810_ug_en-us.pdf&quot; target=_blank&gt;Dell PowerEdge RAID Controller (PERC) H310, H710, H710P, and H810 User’s Guide&lt;/a&gt; on page 9 says the H310 has an LSI2008 chipset.  I&#039;m looking at the H310 as it provides direct passthrough to the drives without RAID.  The card has no caching.
  &lt;li&gt;&lt;a href=&quot;https://www.servethehome.com/lsi-sas-2008-raid-controller-hba-information/&quot; target=_blank&gt;LSI SAS 2008 RAID Controller/ HBA Information&lt;/a&gt; - an H310 is similar to the LSI 9211-8i
  &lt;li&gt;&lt;a href=&quot;https://www.broadcom.com/support/storage/interop-compatibility&quot; target=_blank&gt;Check Interoperability and Compatibility&lt;/a&gt;
  &lt;/ul&gt;

&lt;p&gt;To use the H310 for non-cache passthrough, it is recommended to flash the card to IT mode.  I&#039;ll try this once I obtain the card.  In the meantime, some possibly relevant links to the drivers and process:

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.yellow-bricks.com/2014/04/17/disk-controller-features-and-queue-depth/&quot; target=_blank&gt;Disk Controller features and Queue Depth?&lt;/a&gt; - from 2014, discusses queue depth and diagnostic commands.
  &lt;li&gt;&lt;a href=&quot;https://www.reddit.com/r/homelab/comments/bkxszi/flashing_the_h310_mono_mini_to_it_mode/&quot; target=_blank&gt;Flashing the H310 Mono Mini to IT mode&lt;/a&gt; links to probably the
&lt;a href=&quot;https://phoxden.net/H310MM_IT.pdf&quot; target=_blank&gt;best flashing tutorial&lt;/a&gt;, but the process is complicated for the mini H310.  From the notes:  the R610 is an 11th-generation server, and when the mini H310 is reflashed to IT mode, the server probably won&#039;t reboot.
  &lt;li&gt;&lt;a href=&quot;https://www.ttl.one/2018/02/upgrade-dell-perc-h200h310-easy-way.html&quot; target=_blank&gt; Crossflashing the Dell PERC H200 and H310 to the LSI 9211-8i&lt;/a&gt; (from year 2018) implies that the H310 is the equivalent of an LSI 9211-8i.  Also, a Dell H200 and an H310 must be similar from a firmware perspective.
  &lt;li&gt;&lt;a href=&quot;http://brycv.com/blog/2012/flashing-it-firmware-to-lsi-sas9211-8i/&quot; target=_blank&gt;Flashing IT Firmware to the LSI SAS9211-8i HBA&lt;/a&gt; - 2012 - flash to a pci card
  &lt;li&gt;&lt;a href=&quot;https://tylermade.net/2017/06/27/how-to-crossflash-perc-h310-to-it-mode-lsi-9211-8i-firmware-hba-for-freenas-unraid/&quot; target=_blank&gt;How to crossflash PERC H310 to IT mode LSI 9211-8i firmware (HBA for FreeNAS, UnRAID)&lt;/a&gt; - an article from 2017 with better, more native instructions
  &lt;li&gt;&lt;a href=&quot;https://jc-lan.org/2018/05/19/flash-dell-perc-h310-to-lsi-9211-8i-it-mode-using-bios-and-uefi-method-firmware-bios/&quot; target=_blank&gt;Crossflash Dell PERC H310 to LSI 9211-8i IT Mode Using Legacy (DOS) and UEFI Method (HBA Firmware + BIOS)&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://forums.servethehome.com/index.php?threads/perc-h310-lsi-9211-8i-50.4540/&quot; target=_blank&gt;PERC H310 - LSI 9211-8i - $50&lt;/a&gt; - notes on purchase, install, and upgrade of card.
  &lt;li&gt;&lt;a href=&quot;https://forums.servethehome.com/index.php?threads/problems-flashing-dell-perc-h310.26596/&quot; target=_blank&gt;Problems Flashing Dell Perc H310&lt;/a&gt; - discusses H310 in PCI slot instead
  &lt;li&gt;&lt;a href=&quot;https://www.vladan.fr/flash-dell-perc-h310-with-it-firmware/&quot; target=_blank&gt;How-to Flash Dell Perc H310 with IT Firmware To Change Queue Depth from 25 to 600&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://techmattr.wordpress.com/2016/04/11/updated-sas-hba-crossflashing-or-flashing-to-it-mode-dell-perc-h200-and-h310/&quot; target=_blank&gt;Updated: SAS HBA crossflashing or flashing to IT mode, Dell Perc H200 and H310&lt;/a&gt; from 2016, where it is suggested that &quot;Integrated, Mini or Mini Mono Perc H310 do NOT try to crossflash with these steps&quot;.
  &lt;li&gt;&lt;a href=&quot;https://www.dell.com/support/home/us/en/04/drivers/driversdetails?driverid=hm7mn&quot; target=_blank&gt;DELL PERC H310/H710/H710P/H810 Controllers driver version 5.1.112.64,A00&lt;/a&gt; - a starting point for the search of appropriate drivers
  &lt;li&gt;&lt;a href=&quot;https://www.dell.com/support/home/us/en/04/drivers/driversdetails?driverid=52r3d&quot; target=_blank&gt;LINUX PERCCLI Utility For All Dell HBA/PERC Controllers&lt;/a&gt; - and the starting point for command line utilities
  &lt;li&gt;&lt;a href=&quot;https://hwraid.le-vert.net/wiki/LSIMegaRAIDSAS&quot; target=_blank&gt;LSI MegaRAID SAS&lt;/a&gt; - some notes on the CLI utilities
  &lt;li&gt;&lt;a href=&quot;https://www.mindwerks.net/2012/06/doing-battle-with-a-dell-r620-and-ubuntu/&quot; target=_blank&gt;Doing battle with a Dell R620 and Ubuntu&lt;/a&gt; - has some remarks on H310 testing.
  &lt;li&gt;&lt;a href=&quot;https://forums.servethehome.com/index.php?threads/half-decent-all-in-one-build.7423/#post-66375&quot; target=_blank&gt;H310 and Fan Noise&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://us.community.samsung.com/t5/Monitors-and-Memory/Low-random-read-write-on-samsung-840-250gb-ssd/td-p/265356&quot; target=_blank&gt;Low random read/write on samsung 840 250gb ssd &lt;/a&gt; - Spectre BIOS upgrades can destroy random read/write times
  &lt;/ul&gt;
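
&lt;p&gt;Related to the queue-depth links above (25 on stock H310 firmware versus roughly 600 after crossflashing, per the vladan.fr article), the value a controller actually exposes can be read from sysfs.  A generic sketch, not specific to the H310:

```shell
# Print the effective queue depth the controller exposes for each block
# device; prints nothing on a machine without such sysfs entries.
for q in /sys/block/*/device/queue_depth; do
  [ -r "$q" ] || continue
  printf '%s: %s\n' "$q" "$(cat "$q")"
done
```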

&lt;p&gt;Drive Information:

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://techcommunity.microsoft.com/t5/storage-at-microsoft/don-t-do-it-consumer-grade-solid-state-drives-ssd-in-storage/ba-p/425914#&quot; target=_blank&gt;Don&#039;t do it: consumer-grade solid-state drives (SSD) in Storage Spaces Direct &lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/&quot; target=_blank&gt;Ceph: how to test if your SSD is suitable as a journal device?&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.samsung.com/semiconductor/global.semi.static/High-Performance_Red_Hat_Ceph_Storage_Using_Samsung_NVMe_SSDs_WP_20161006-0.pdf&quot; target=_blank&gt;High-performance cluster storage for IOPS-intensive workloads&lt;/a&gt;
  &lt;/ul&gt;

&lt;p&gt;Cluster test:
&lt;blockquote&gt;&lt;pre&gt;
rados bench -p vm_storage 10 write -b 4M -t 16
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;2022/04/13 - challenges

&lt;p&gt;In a Dell R620, with an LSI SAS card, it took a while to find a utility to communicate with the card.  Over the fold are some results: &lt;br /&gt;&lt;a href=&quot;https://blog.raymond.burkholder.net/index.php?/archives/1059-Dell-R610-with-H310-Drive-Controller.html#extended&quot;&gt;Continue reading &quot;Dell R610 with H310 Drive Controller&quot;&lt;/a&gt;
    </content:encoded>

    <pubDate>Thu, 26 Mar 2020 18:01:33 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/1059-guid.html</guid>
    
</item>
<item>
    <title>Ramping Up For a New ZFS Project</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/1031-Ramping-Up-For-a-New-ZFS-Project.html</link>
            <category>ZFS</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/1031-Ramping-Up-For-a-New-ZFS-Project.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=1031</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1031</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/ZFS&quot; target=_blank&gt;ZFS&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;http://open-zfs.org/wiki/OpenZFS_Developer_Summit_2019&quot; target=_blank&gt;OpenZFS Developer Summit 2019&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://drive.google.com/open?id=0B_J4mRfoVJQRbDBWY0o4RmNuc2FXNDMySWZjd2t1WGlpdmkw&quot; target=_blank&gt;Storage Configurator&lt;/a&gt; - from the conference
  &lt;li&gt;&lt;a href=&quot;https://www.ixsystems.com/&quot; target=_blank&gt;iX Systems&lt;/a&gt; - storage vendor
  &lt;li&gt;&lt;a href=&quot;https://www.solaris-cookbook.eu/linux/zfs-fun-zfs-compression-deduplication-useful-data-much-memory-need-zfs-dedup/&quot; target=_blank&gt;ZFS: Fun with ZFS&lt;/a&gt; – is compression and deduplication useful for my data and how much memory do I need for zfs dedup?
  &lt;li&gt;&lt;a href=&quot;https://blog.heckel.io/2017/01/08/zfs-encryption-openzfs-zfs-on-linux/#FAQ&quot; target=_blank&gt;How-To: Using ZFS Encryption at Rest in OpenZFS (ZFS on Linux, ZFS on FreeBSD, …)&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.open-zfs.org/wiki/Performance_tuning&quot; target=_blank&gt;Performance tuning&lt;/a&gt; - OpenZFS
  &lt;/ul&gt; 
    </content:encoded>

    <pubDate>Sun, 10 Nov 2019 18:52:51 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/1031-guid.html</guid>
    
</item>
<item>
    <title>linux: serious corruption issue with btrfs</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/1024-linux-serious-corruption-issue-with-btrfs.html</link>
            <category>BTRFS</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/1024-linux-serious-corruption-issue-with-btrfs.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=1024</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1024</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;From &lt;a href=&quot;https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=940105&quot; target=_blank&gt;Debian Bug report logs - #940105&lt;/a&gt;:

&lt;blockquote&gt;
&lt;p&gt;There were some reports over the last weeks from users on linux-btrfs which
suffered from catastrophic btrfs corruption.
&lt;p&gt;The bug which is apparently a regression introduced in 5.2 has now been found[0]
and a patch is available[1].
&lt;p&gt;Since it&#039;s unclear how long it will take to be part of a stable release and when
Debian will pick this up in unstable, please consider to cherry-pick the patch.

&lt;ul&gt;
  &lt;li&gt;[0] &lt;a href=&quot;https://lore.kernel.org/linux-btrfs/9731b0e7-81f3-4ee5-6f89-b4fd8d981736@petaramesh.org/T/#m38d726b09e784f1ffbd26edf13f723f71045723e&quot; target=_blank&gt;lore.kernel.org&lt;/a&gt;
  &lt;li&gt;[1] &lt;a href=&quot;https://patchwork.kernel.org/patch/11141559/&quot; target=_blank&gt;patchwork.kernel.org&lt;/a&gt;
  &lt;/ul&gt;
&lt;/blockquote&gt; 
    </content:encoded>

    <pubDate>Sun, 15 Sep 2019 17:06:06 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/1024-guid.html</guid>
    
</item>
<item>
    <title>Sheepdog Configuration</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/856-Sheepdog-Configuration.html</link>
            <category>SheepDog</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/856-Sheepdog-Configuration.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=856</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=856</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;I did a custom build of the Sheepdog package (the building of which I still need to document), but the installation went along the lines of:

&lt;blockquote&gt;&lt;pre&gt;
apt install libzookeeper-mt2
apt install corosync  libcorosync-common-dev
dpkg -i /home/rburkholder/sheepdog_1.0+169.g65958e35-1_amd64.deb
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Configuration for Sheepdog is simple; on a Debian Stretch machine it resides in /etc/default/sheepdog.  Mine, I think, is pretty much default for now.

&lt;blockquote&gt;&lt;pre&gt;
# start sheepdog at boot [yes|no]
START=&quot;yes&quot;

# Arguments to run the daemon with
# Options:
#  -p, --port              specify the TCP port on which to listen
#  -l, --loglevel          specify the level of logging detail
#  -d, --debug             include debug messages in the log
#  -D, --directio          use direct IO when accessing the object store
#  -z, --zone              specify the zone id
#  -c, --cluster           specify the cluster driver
DAEMON_ARGS=&quot;-b 0.0.0.0 -c corosync  -l dir=/var/log/,level=debug,format=server&quot;

# SHEEPDOG_PATH
#       Proper LSB systems will store sheepdog files in /var/lib/sheepdog.  The init script uses this directory by default.
#       The directory must be on a filesystem with xattr support.  In the case of ext3, user_xattr should be added  to  the
#       mount options.
#
#       mount -o remount,user_xattr /var/lib/sheepdog
SHEEPDOG_PATH=&quot;/var/lib/sheepdog&quot;
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;For use with libvirt, there is a fix to perform on a soft link:

&lt;blockquote&gt;&lt;pre&gt;
rm /usr/sbin/collie
ln -s /usr/bin/dog /usr/sbin/collie
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Once the corosync and sheepdog services are configured and running, sheepdog needs only one more command:  format the cluster.  I used the 
&lt;a href=&quot;https://github.com/sheepdog/sheepdog/wiki/Erasure-Code-Support&quot; target=_blank&gt;Erasure Code Support&lt;/a&gt; mechanism.  The trick here is that the format command applies to the directory (by default &#039;/var/lib/sheepdog&#039;) set in the initialization scripts.

&lt;blockquote&gt;&lt;pre&gt;
# dog cluster format -c 2:1

#   dog cluster info -v
Cluster status: running, auto-recovery enabled
Cluster store: plain with 2:1 redundancy policy
Cluster vnodes strategy: auto
Cluster vnode mode: node
Cluster created at Tue Oct 31 16:12:13 2017

Epoch Time           Version [Host:Port:V-Nodes,,,]
2017-10-31 16:12:13      1 [172.16.1.21:7000:128, 172.16.1.22:7000:128, 172.16.1.23:7000:128]
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;blockquote&gt;&lt;pre&gt;
# dog node list
  Id   Host:Port         V-Nodes       Zone
   0   172.16.1.21:7000           128  352389386
   1   172.16.1.22:7000           128  369166602
   2   172.16.1.23:7000           128  385943818
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;This simple setup provides the block storage space to be used by libvirt and the virtualization guests under its control. 
    </content:encoded>

    <pubDate>Tue, 07 Nov 2017 16:37:36 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/856-guid.html</guid>
    
</item>
<item>
    <title>Corosync in a Three-Some</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/855-Corosync-in-a-Three-Some.html</link>
            <category>SheepDog</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/855-Corosync-in-a-Three-Some.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=855</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=855</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;Following on from the previous article, this describes a corosync configuration for three appliances configured together in a &#039;triangle&#039;.  OSPF/BGP is running on each appliance.  With this routing configuration, I am able to apply an IP address to the loopback interface and make each of those addresses mutually reachable from each appliance.

&lt;p&gt;I think most corosync examples make the assumption that all nodes are within the same segment.  This then suggests a multicast solution.  As I am using routing between each appliance, I need a unicast solution.

&lt;p&gt;The following is an example configuration file for the second of the three nodes / appliances.  Notice that the bind_addr is the loopback address, and that all three nodes taking part in the quorum are listed.  There is a &#039;mcastport&#039; listed, but because of &#039;transport: udpu&#039;, unicast is actually used on that port number.
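
&lt;p&gt;As a shape-of-the-thing sketch of such a unicast configuration (not my actual file; addresses, port, and node ids are placeholders):

```
totem {
    version: 2
    transport: udpu              # unicast UDP instead of multicast
    interface {
        ringnumber: 0
        bindnetaddr: 10.0.0.2    # this node's loopback address (placeholder)
        mcastport: 5405          # with udpu, this is the unicast port
    }
}

nodelist {
    node {
        ring0_addr: 10.0.0.1
        nodeid: 1
    }
    node {
        ring0_addr: 10.0.0.2
        nodeid: 2
    }
    node {
        ring0_addr: 10.0.0.3
        nodeid: 3
    }
}

quorum {
    provider: corosync_votequorum
}
```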

 &lt;br /&gt;&lt;a href=&quot;https://blog.raymond.burkholder.net/index.php?/archives/855-Corosync-in-a-Three-Some.html#extended&quot;&gt;Continue reading &quot;Corosync in a Three-Some&quot;&lt;/a&gt;
    </content:encoded>

    <pubDate>Tue, 07 Nov 2017 15:09:15 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/855-guid.html</guid>
    
</item>
<item>
    <title>ZFS Install Notes for use with Sheepdog</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/852-ZFS-Install-Notes-for-use-with-Sheepdog.html</link>
            <category>ZFS</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/852-ZFS-Install-Notes-for-use-with-Sheepdog.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=852</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=852</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;In my Sheepdog cluster, I have three nodes, with each node having two 1TB SSDs dedicated to the use of a ZFS file system.  Each node stripes the two drives together to gain some read performance, and then Sheepdog will apply an Erasure Code redundancy scheme across the three nodes to provide a 2:1 erasure-coded tolerant set (in this case similar to RAID 5), which should yield about 4TB of useful storage space.
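
&lt;p&gt;The 4TB figure is just the erasure-code arithmetic; a quick back-of-envelope check:

```shell
# 3 nodes x (2 x 1TB SSDs striped) = 6TB raw capacity; a 2:1 erasure
# code stores 2 data strips per parity strip, so 2/3 of raw is usable.
raw_tb=$((3 * 2 * 1))
data=2
parity=1
usable_tb=$((raw_tb * data / (data + parity)))
echo "${usable_tb}TB usable"   # prints: 4TB usable
```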

&lt;p&gt;Creating the ZFS file system is a two-step process:  create a simple zpool, then apply the file system.  This example uses two partitions on the same drive to prove the concept, but in real use, two whole drives should be used.

&lt;p&gt;Step 1:

&lt;blockquote&gt;&lt;pre&gt;
# zpool create -o ashift=12 \
      -O atime=off -O canmount=off -O compression=lz4 \
      sheepdog /dev/sda7 /dev/sda8
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Step 2 (this mount point is the location where sheepdog will apply its &#039;dog cluster format&#039; instruction):

&lt;blockquote&gt;&lt;pre&gt;

# zfs create sheepdog/data -o mountpoint=/var/lib/sheepdog
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Followed by confirming what was defined:
	  
&lt;blockquote&gt;&lt;pre&gt;
# zpool status
  pool: sheepdog
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        sheepdog    ONLINE       0     0     0
          sda7      ONLINE       0     0     0
          sda8      ONLINE       0     0     0

errors: No known data errors
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;blockquote&gt;&lt;pre&gt;
# zpool list
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
sheepdog  7.56G   444K  7.56G         -     0%     0%  1.00x  ONLINE  -
root@sw02.d01.bm1:/home/rburkholder# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
sheepdog        408K  7.33G    96K  /sheepdog
sheepdog/data    96K  7.33G    96K  /var/lib/sheepdog
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6r2s/index.html&quot; target=_blank&gt;Oracle&#039;s docs on properties&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.howtoforge.com/tutorial/how-to-install-and-configure-zfs-on-debian-8-jessie/&quot; target=_blank&gt;Basic Notes&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.oracle.com/cd/E19253-01/819-5461/gaynr/index.html &quot; target=_blank&gt;Creation Examples&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS&quot; target=_blank&gt;Detailed Info&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;http://open-zfs.org/w/images/c/c8/10-ZIL_performance.pdf&quot; target=_blank&gt;ZIL Performance: How I Doubled Sync
Write Speed&lt;/a&gt;: background information on how the ZIL works.
  &lt;li&gt;&lt;a href=&quot;http://open-zfs.org/wiki/OpenZFS_Developer_Summit_2017&quot; target=_blank&gt;OpenZFS Developer Summit 2017&lt;/a&gt;: talks and papers and videos from the 2017 Summit.
  &lt;li&gt;&lt;a href=&quot;http://www.thegeekstuff.com/2015/07/zfs-on-linux-zpool/&quot; target=_blank&gt;How to Setup ZFS Filesystem on Linux with zpool Command Examples&lt;/a&gt;
  &lt;li&gt;2018/06/24: &lt;a href=&quot;https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSBroadDiskStructure&quot; target=_blank&gt;broad overview of how ZFS is structured on disk&lt;/a&gt;
  &lt;li&gt;2018/06/24: &lt;a href=&quot;https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSDirectoriesAndChanges&quot; target=_blank&gt;ZFSDirectoriesAndChanges&lt;/a&gt;
  &lt;/ul&gt;
 
    </content:encoded>

    <pubDate>Mon, 06 Nov 2017 13:20:51 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/852-guid.html</guid>
    
</item>
<item>
    <title>Sheepdog Notes</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/849-Sheepdog-Notes.html</link>
            <category>SheepDog</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/849-Sheepdog-Notes.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=849</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=849</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;I have been looking at various distributed storage solutions, hoping to find something reliable in an open source style of solution.  Some names I&#039;ve encountered (open and closed source):

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;http://ceph.com/&quot; target=_blank&gt;Ceph&lt;/a&gt;: by some accounts, seems to be resource heavy, but at the same time, appears to be well used in the industry
  &lt;li&gt;&lt;a href=&quot;http://www.openvstorage.com/&quot; target=_blank&gt;Open vStorage&lt;/a&gt;: could be a strong contender for me, but I have a bias against Java-based applications.
  &lt;li&gt;&lt;a href=&quot;http://lustre.org/&quot; target=_blank&gt;Lustre&lt;/a&gt;: I&#039;ve been watching this for quite some time, but the features didn&#039;t quite mesh with my desires
  &lt;li&gt;&lt;a href=&quot;http://www.zeta.systems/&quot; target=_blank&gt;Zeta Systems&lt;/a&gt;: a mixture of proprietary and open solutions, which almost fits in with my perceptions, and uses ZFS as the underlying hardware format
  &lt;li&gt;&lt;a href=&quot;https://github.com/sheepdog/sheepdog&quot; target=_blank&gt;SheepDog&lt;/a&gt;:  I keep coming back to looking at this.  With a version 1 release a little while ago, the developers indicate it satisfies their &#039;single point of nothing&#039; criteria, which overlaps with some of my own criteria.  In addition, it appears to be resource-light, horizontally scalable, and integrates with the tools I am trying to use:  lxc, kvm, and libvirt.
  &lt;/ul&gt;

&lt;p&gt;As Debian doesn&#039;t have a very recent package built, I build from scratch.  Since my test environment is small, I use corosync rather than zookeeper.  Here are the requisite packages and my statements for a package build:

&lt;blockquote&gt;&lt;pre&gt;
apt install --no-install-recommends \
  build-essential \
  git \
  corosync corosync-dev \
  libsystemd-dev \
  autoconf \
  m4 \
  pkg-config \
  yasm \
  liburcu-dev \
  libcpg-dev \
  libcfg-dev \
  libfuse-dev \
  libcurl4-openssl-dev \
  libfcgi-dev \
  dh-make \
  devscripts \
  bash-completion \
  libzookeeper-mt-dev
git clone https://github.com/sheepdog/sheepdog.git
cd sheepdog/
git log &gt; debian/changelog
./autogen.sh
./configure --sysconfdir=/etc/sheepdog --enable-corosync --enable-sheepfs --enable-http --enable-nfs --enable-systemd
make deb
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;From &lt;a href=&quot;https://github.com/sheepdog/sheepdog/wiki/Which-Format-of-QEMU-Images-Should-I-Run&quot; target=_blank&gt;Which Format of QEMU Images Should I Run&lt;/a&gt;, there is this table:

&lt;blockquote&gt;&lt;pre&gt;
| format              | snapshot/clone | thin-provision | DISCARD | encryption | compression |
| raw over file       |       NO       |      YES       |    NO   |     NO     |      NO     |
| raw over sheepdog   |       YES      |      YES       |   YES   |     NO     |      NO     |
| qcow2 over sheepdog |       YES      |      YES       |   YES   |     YES    |      YES    |
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Some links:

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;http://events.linuxfoundation.jp/sites/events/files/slides/COJ2015_Sheepdog_20150604.pdf&quot; target=_blank&gt;Sheepdog is Ready&lt;/a&gt;: distributed block storage is turning from experiment to production use.  Has performance test scenarios and background on durability, scalability, manageability, and availability (can be run with multipath SCSI targets).
  &lt;/ul&gt;

&lt;p&gt;On the Sheepdog mailing list, a mechanism other than &lt;a href=&quot;https://github.com/sheepdog/sheepdog/wiki/Sheepfs&quot; target=_blank&gt;sheepfs&lt;/a&gt; was described as a way to present a file system:

&lt;blockquote&gt;&lt;pre&gt;
You can do, through qemu-nbd, formatting it and mounting it.

sheepdog -&gt; qemu-nbd -&gt; /dev/nbd{x} -&gt; xfs/ext3/ext4/.. -&gt; mount

modprobe nbd
qemu-nbd sheepdog://localhost:7000/my_volume -c /dev/nbd1
# Optionally you can do the rest on a different machine using nbd-client on this step
mkfs.xfs /dev/nbd1
mount /dev/nbd1 /path/to/mount 
&lt;/pre&gt;&lt;/blockquote&gt; 
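&lt;p&gt;Tearing this down is the reverse of the sequence above; a sketch using the same example device and mount point (echoed rather than executed here, since both steps need root and a running sheepdog cluster):

```shell
# Reverse of the mount sequence: unmount the file system, then
# disconnect the nbd device from the sheepdog volume.
umount_cmd="umount /path/to/mount"
detach_cmd="qemu-nbd -d /dev/nbd1"
echo "$umount_cmd"
echo "$detach_cmd"
```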
    </content:encoded>

    <pubDate>Thu, 02 Nov 2017 17:29:51 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/849-guid.html</guid>
    
</item>
<item>
    <title>Building ZFS on Debian Stretch</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/844-Building-ZFS-on-Debian-Stretch.html</link>
            <category>Debian</category>
            <category>ZFS</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/844-Building-ZFS-on-Debian-Stretch.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=844</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=844</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;Due to various licensing compatibility issues, which are described at 
&lt;a href=&quot;https://bits.debian.org/2016/05/what-does-it-mean-that-zfs-is-in-debian.html&quot; target=_blank&gt;What does it mean that ZFS is in Debian&lt;/a&gt; and
&lt;a href=&quot;http://blog.halon.org.uk/2016/01/on-zfs-in-debian/&quot; target=_blank&gt;On ZFS in Debian&lt;/a&gt;, source-only packages are available for ZFS on Debian Linux.  Binaries need to be &#039;self-built&#039;.  Here is my method for building those binaries as packages.

&lt;p&gt;I found some background information for building the packages in 
&lt;a href=&quot;https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=554843&quot; target=_blank&gt;Debian bug #554843&lt;/a&gt;.

&lt;p&gt;To start, add &#039;contrib&#039; to /etc/apt/sources.list and run &#039;apt update&#039;.
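&lt;p&gt;For example, a line like the following (the mirror and suite are placeholders for whatever the machine already uses):

```
deb http://deb.debian.org/debian stretch main contrib
```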

&lt;p&gt;There are two dkms modules to build: the ZFS kernel module, and the Solaris Porting Layer (SPL) kernel module upon which it depends.

&lt;p&gt;This process needs to be repeated each time the kernel package or any of the related ZFS packages are updated.  It builds the kernel modules, and could be performed on a &#039;build machine&#039;, since various extra packages get installed to support the process:

&lt;blockquote&gt;&lt;pre&gt;
apt install linux-headers-$(uname -r)
apt install dpkg-dev fakeroot debhelper
DEBIAN_FRONTEND=noninteractive apt-get -y --no-install-recommends install  spl-dkms
DEBIAN_FRONTEND=noninteractive apt-get -y --no-install-recommends install  zfsutils-linux zfs-zed  zfs-dkms
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Packages can then be built and transported for installation on other machines:

&lt;blockquote&gt;&lt;pre&gt;
dkms mkbmdeb spl -v 0.6.5.9 --dkmsframework framework.conf --binaries-only
dkms mkbmdeb zfs -v 0.6.5.9 --dkmsframework framework.conf --binaries-only
&lt;/pre&gt;&lt;/blockquote&gt; 
    </content:encoded>

    <pubDate>Tue, 31 Oct 2017 12:38:29 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/844-guid.html</guid>
    
</item>
<item>
    <title>BTRFS on Debian</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/814-BTRFS-on-Debian.html</link>
            <category>BTRFS</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/814-BTRFS-on-Debian.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=814</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=814</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;Debian has a &lt;a href=&quot;https://wiki.debian.org/Btrfs&quot; target=_blank&gt;BTRFS Wiki&lt;/a&gt;.  One item there, which affected me, is that kernel 4.11 has issues and will cause corruption.  I am now on kernel 4.12.  I&#039;m not sure if having duplicated metadata would have prevented some of the pain of recovery.  To see if metadata is redundant:

&lt;blockquote&gt;&lt;pre&gt;
 btrfs fi df /
Data, single: total=14.00GiB, used=12.63GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=520.00MiB, used=317.27MiB
GlobalReserve, single: total=31.22MiB, used=0.00B
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;This is on a laptop with a single SSD.  It has been written elsewhere that even if metadata duplication is requested, the SSD may deduplicate it anyway.
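&lt;p&gt;If duplicated metadata is wanted on an existing filesystem, btrfs balance can convert it in place; a sketch (echoed rather than executed here, since it rewrites metadata and needs root -- the mount point is the / checked above):

```shell
# Convert single metadata to DUP on a mounted btrfs filesystem.
cmd="btrfs balance start -mconvert=dup /"
echo "$cmd"
```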

&lt;p&gt;So... regular maintenance and scanning is recommended.

&lt;p&gt;For maintenance, the wiki article suggests regular defragmentation (the -t 32M parameter is not needed since Debian 9 (Stretch)):

&lt;blockquote&gt;
sudo ionice -c idle btrfs filesystem defragment -f -t 32M -r $PATH
&lt;/blockquote&gt;

&lt;p&gt;The -f  parameter is recommended for flushing after each file, particularly when there are snapshots or reflinked files.

&lt;p&gt;One way to find btrfs formatted file systems:

&lt;blockquote&gt;&lt;pre&gt;
# grep btrfs /etc/fstab
UUID=b5714bf3-eec4-431d-8e3e-6b062f7e5c55 /               btrfs   noatime,nodiratime 0       0
UUID=affc8ed9-c1c0-403d-8ba1-b8ca68d2d7d7 /var            btrfs   noatime,nodiratime 0       0
UUID=b662aa71-5b72-4028-a10a-e286c56b87cf /home/rpb     btrfs    noatime,nodiratime 0 0
&lt;/pre&gt;&lt;/blockquote&gt;
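&lt;p&gt;For scripting, the same lookup can key on the fstab fields rather than a grep; a self-contained sketch (the inline sample stands in for /etc/fstab):

```shell
# Print the mount points of btrfs entries; point awk at /etc/fstab
# for real use.
sample='UUID=b5714bf3 /     btrfs noatime,nodiratime 0 0
UUID=affc8ed9 /var  btrfs noatime,nodiratime 0 0
UUID=0123abcd /boot ext4  defaults           0 2'
mounts=$(printf '%s\n' "$sample" | awk '$3 == "btrfs" { print $2 }')
echo "$mounts"
```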

&lt;p&gt;To check for errors:

&lt;blockquote&gt;&lt;pre&gt;
# btrfs dev stats /home
[/dev/nvme0n1p2].write_io_errs    0
[/dev/nvme0n1p2].read_io_errs     0
[/dev/nvme0n1p2].flush_io_errs    0
[/dev/nvme0n1p2].corruption_errs  0
[/dev/nvme0n1p2].generation_errs  0
&lt;/pre&gt;&lt;/blockquote&gt;
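&lt;p&gt;Since all of these counters should stay at zero, the check scripts easily; a sketch summing them (the inline sample reuses the output above, so the sketch runs without a btrfs mount; recent btrfs-progs also have a --check flag on dev stats that sets a non-zero exit status, if available):

```shell
# Sum the per-device error counters; anything non-zero warrants attention.
stats='[/dev/nvme0n1p2].write_io_errs    0
[/dev/nvme0n1p2].read_io_errs     0
[/dev/nvme0n1p2].flush_io_errs    0
[/dev/nvme0n1p2].corruption_errs  0
[/dev/nvme0n1p2].generation_errs  0'
errors=$(printf '%s\n' "$stats" | awk '{ sum += $2 } END { print sum + 0 }')
[ "$errors" -eq 0 ] && echo "OK: no drive errors" || echo "WARN: $errors drive errors"
```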

&lt;p&gt;To manually initiate an online scrub and monitor status:

&lt;blockquote&gt;&lt;pre&gt;
# btrfs scrub start /mnt
scrub started on /mnt, fsid ab27f528-d417-4ff9-9eb4-b59ad940290f (pid=14535)
&lt;/pre&gt;&lt;/blockquote&gt;
&lt;blockquote&gt;&lt;pre&gt;
# btrfs scrub status /mnt
scrub status for ab27f528-d417-4ff9-9eb4-b59ad940290f
        scrub started at Sun Sep 24 19:55:56 2017, running for 00:00:10
        total bytes scrubbed: 2.08GiB with 0 errors
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;A scrub with detailed results running in foreground:

&lt;blockquote&gt;&lt;pre&gt;
# btrfs scrub start -B -d -R /
scrub device /dev/nvme0n1p2 (id 1) done
        scrub started at Sun Oct  8 12:16:54 2017 and finished after 00:00:05
        data_extents_scrubbed: 373524
        tree_extents_scrubbed: 20306
        data_bytes_scrubbed: 13566894080
        tree_bytes_scrubbed: 332693504
        read_errors: 0
        csum_errors: 0
        verify_errors: 0
        no_csum: 25579
        csum_discards: 0
        super_errors: 0
        malloc_errors: 0
        uncorrectable_errors: 0
        unverified_errors: 0
        corrected_errors: 0
        last_physical: 15590227968
&lt;/pre&gt;&lt;/blockquote&gt;
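&lt;p&gt;Scrubs lend themselves to scheduling; a minimal /etc/cron.d sketch (the file name, monthly cadence, and target path are assumptions, adjust to taste):

```
# /etc/cron.d/btrfs-scrub -- monthly scrub of / at 03:00 on the 1st (sketch)
0 3 1 * * root btrfs scrub start -Bd /
```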

&lt;p&gt;Useful BTRFS pages:

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://wiki.archlinux.org/index.php/Btrfs&quot; target=_blank&gt;archlinux&lt;/a&gt; with an entry on doing a btrfs scrub using a timer service
  &lt;li&gt;&lt;a href=&quot;http://marc.merlins.org/perso/btrfs/post_2014-03-19_Btrfs-Tips_-Btrfs-Scrub-and-Btrfs-Filesystem-Repair.html&quot; target=_blank&gt;Marc&#039;s Public Blog - Linux Btrfs Blog Posts&lt;/a&gt;: with some entries about mounting a system with errors and bypassing checksum problems.
  &lt;li&gt;&lt;a href=&quot;https://coreos.com/os/docs/latest/btrfs-troubleshooting.html&quot; target=_blank&gt;Working with btrfs and common troubleshooting&lt;/a&gt; by the Container Linux people.
  &lt;/ul&gt; &lt;br /&gt;&lt;a href=&quot;https://blog.raymond.burkholder.net/index.php?/archives/814-BTRFS-on-Debian.html#extended&quot;&gt;Continue reading &quot;BTRFS on Debian&quot;&lt;/a&gt;
    </content:encoded>

    <pubDate>Sun, 24 Sep 2017 22:45:05 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/814-guid.html</guid>
    
</item>
<item>
    <title>Running SmartMonTools to Regularily Check Drives on Debian</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/813-Running-SmartMonTools-to-Regularily-Check-Drives-on-Debian.html</link>
            <category>File Systems</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/813-Running-SmartMonTools-to-Regularily-Check-Drives-on-Debian.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=813</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=813</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;There is a home page for &lt;a href=&quot;https://www.smartmontools.org/&quot; target=_blank&gt;smartmontools&lt;/a&gt;.

&lt;p&gt;Install the tools:

&lt;blockquote&gt;apt install smartmontools&lt;/blockquote&gt;

&lt;p&gt;Scan for drives:

&lt;blockquote&gt;&lt;pre&gt;
# smartctl --scan
/dev/sda -d sat # /dev/sda [SAT], ATA device
/dev/sdb -d sat # /dev/sdb [SAT], ATA device
/dev/sdf -d scsi # /dev/sdf, SCSI device
# smartctl --scan -d nvme
/dev/nvme0 -d nvme # /dev/nvme0, NVMe device
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Check the drives for SMART support.  NVMe drives don&#039;t report ATA SMART support, but they are still accessible to the tools:

&lt;blockquote&gt;&lt;pre&gt;
# smartctl -i /dev/nvme0n1
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.12.0-2-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       Samsung SSD 960 PRO 512GB
Serial Number:                      S3EWNWAJ200309M
Firmware Version:                   1B6QCXP7
PCI Vendor/Subsystem ID:            0x144d
IEEE OUI Identifier:                0x002538
Total NVM Capacity:                 512,110,190,592 [512 GB]
Unallocated NVM Capacity:           0
Controller ID:                      2
Number of Namespaces:               1
Namespace 1 Size/Capacity:          512,110,190,592 [512 GB]
Namespace 1 Utilization:            31,038,529,536 [31.0 GB]
Namespace 1 Formatted LBA Size:     512
Local Time is:                      Sun Sep 24 18:29:43 2017 ADT
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;But many regular drives do:

&lt;blockquote&gt;&lt;pre&gt;
# smartctl -i /dev/sdf
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.12.0-2-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Samsung based SSDs
Device Model:     Samsung SSD 850 EVO 1TB
Serial Number:    S35UNX0J102403N
LU WWN Device Id: 5 002538 d419eca15
Firmware Version: EMT02B6Q
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 4c
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sun Sep 24 18:33:20 2017 ADT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Configurations can be changed in /etc/smartd.conf.  Change the -m parameter to customize an email address.

&lt;blockquote&gt;&lt;pre&gt;
/dev/nvme0n1 -a -H -S on -d nvme -m xxx
/dev/sda     -a -H -S on -d sat -m xxx
/dev/sdb     -a -H -S on -d sat -m xxx
/dev/sdf     -a -H -S on -d sat -m xxx
&lt;/pre&gt;&lt;/blockquote&gt;
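&lt;p&gt;smartd can also schedule the self-tests itself via a -s directive (regexp syntax as per the smartd.conf man page); a sketch for one of the drives above, running a short test daily at 02:00 and a long test on Saturdays at 03:00:

```
/dev/sda -a -H -S on -d sat -m xxx -s (S/../.././02|L/../../6/03)
```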

&lt;p&gt;Enable the service by uncommenting &quot;start_smartd=yes&quot; in /etc/default/smartmontools.

&lt;p&gt;Then start the service:

&lt;blockquote&gt;systemctl start smartmontools&lt;/blockquote&gt;

&lt;p&gt;Running a manual self test:

&lt;blockquote&gt;&lt;pre&gt;
# smartctl -t short /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.12.0-2-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: &quot;Execute SMART Short self-test routine immediately in off-line mode&quot;.
Drive command &quot;Execute SMART Short self-test routine immediately in off-line mode&quot; successful.
Testing has begun.
Please wait 1 minutes for test to complete.
Test will complete after Sun Sep 24 19:24:05 2017

Use smartctl -X to abort test.
&lt;/pre&gt;&lt;/blockquote&gt;
&lt;blockquote&gt;&lt;pre&gt;
# smartctl -l selftest /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.12.0-2-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Self-test routine in progress 20%        29         -
&lt;/pre&gt;&lt;/blockquote&gt;
&lt;blockquote&gt;&lt;pre&gt;
# smartctl -l selftest /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.12.0-2-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%        29         -
&lt;/pre&gt;&lt;/blockquote&gt;
&lt;blockquote&gt;&lt;pre&gt;
# smartctl -l error /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.12.0-2-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Error Log Version: 1
No Errors Logged
&lt;/pre&gt;&lt;/blockquote&gt;


 &lt;br /&gt;&lt;a href=&quot;https://blog.raymond.burkholder.net/index.php?/archives/813-Running-SmartMonTools-to-Regularily-Check-Drives-on-Debian.html#extended&quot;&gt;Continue reading &quot;Running SmartMonTools to Regularily Check Drives on Debian&quot;&lt;/a&gt;
    </content:encoded>

    <pubDate>Sun, 24 Sep 2017 21:30:17 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/813-guid.html</guid>
    
</item>
<item>
    <title>Looking At Drive Information</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/812-Looking-At-Drive-Information.html</link>
            <category>File Systems</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/812-Looking-At-Drive-Information.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=812</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=812</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
&lt;p&gt;Using &lt;a href=&quot;https://www.cyberciti.biz/faq/find-hard-disk-hardware-specs-on-linux/&quot; target=_blank&gt;HowTo: Find Out Hard Disk Specs / Details on Linux&lt;/a&gt; as a starting point, here is how to obtain controller and drive information.

&lt;p&gt;Another useful link is &lt;a href=&quot;https://www.cyberciti.biz/faq/linux-checking-sas-sata-disks-behind-adaptec-raid-controllers/&quot; target=_blank&gt;Use smartctl To Check Disk Behind Adaptec RAID Controllers&lt;/a&gt;, which includes some sparse info on running drive tests. &lt;br /&gt;&lt;a href=&quot;https://blog.raymond.burkholder.net/index.php?/archives/812-Looking-At-Drive-Information.html#extended&quot;&gt;Continue reading &quot;Looking At Drive Information&quot;&lt;/a&gt;
    </content:encoded>

    <pubDate>Sun, 24 Sep 2017 21:01:48 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/812-guid.html</guid>
    
</item>
<item>
    <title>No Excuse for not using ZFS on Debian</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/785-No-Excuse-for-not-using-ZFS-on-Debian.html</link>
            <category>ZFS</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/785-No-Excuse-for-not-using-ZFS-on-Debian.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=785</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=785</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
&lt;p&gt;With all the BTRFS bashing going on, those of us who care about checksummed data and metadata have not had an easy alternative.  That has now been solved.

&lt;p&gt;ZFS has been available for some time now as a set of native (contrib) packages in Debian [zfs-zed, zfsutils-linux, zfs-dkms, zfs-initramfs, zfsutils, zfs-dracut].  I will need to give it a test.  With the right set of packages and boot configuration, it is also possible to use ZFS for the boot partition.

&lt;p&gt;The package tracker is &lt;a href=&quot;https://tracker.debian.org/pkg/zfs-linux&quot; target=_blank&gt;zfs-linux&lt;/a&gt;.

&lt;p&gt;Primary web site is &lt;a href=&quot;http://zfsonlinux.org&quot; target=_blank&gt;zfsonlinux.org&lt;/a&gt;.  Documentation, FAQ, and a wiki can be found there.

&lt;p&gt;A quick setup-and-go guide can be found at &lt;a href=&quot;https://www.howtoforge.com/tutorial/how-to-install-and-configure-zfs-on-debian-8-jessie/&quot; target=_blank&gt;HowtoForge&lt;/a&gt;. 
    </content:encoded>

    <pubDate>Tue, 08 Aug 2017 17:36:56 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/785-guid.html</guid>
    
</item>
<item>
    <title>Ceph Stuff</title>
    <link>https://blog.raymond.burkholder.net/index.php?/archives/693-Ceph-Stuff.html</link>
            <category>Ceph</category>
    
    <comments>https://blog.raymond.burkholder.net/index.php?/archives/693-Ceph-Stuff.html#comments</comments>
    <wfw:comment>https://blog.raymond.burkholder.net/wfwcomment.php?cid=693</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=693</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;Some Ceph links on which I need to review:

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.hastexo.com/resources/hints-and-kinks/solid-state-drives-and-ceph-osd-journals&quot; target=_blank&gt;Solid-state drives and Ceph OSD journals&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://alanxelsys.com/ceph-hands-on-guide/&quot; target=_blank&gt;AJ&#039;s Data Storage Tutorials&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://arvimal.wordpress.com/category/ceph-2/&quot; target=_blank&gt;Sharding the Ceph RADOS Gateway bucket index&lt;/a&gt;: quite a few good Ceph articles
  &lt;li&gt;&lt;a href=&quot;https://arvimal.wordpress.com/2015/07/09/how-to-compact-a-ceph-monitor-store/&quot; target=_blank&gt;Compacting a Ceph monitor store&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.stratoscale.com/blog/storage/introduction-to-ceph/&quot; target=_blank&gt;Introduction to Ceph&lt;/a&gt;: from StratoScale
  &lt;li&gt;&lt;a href=&quot;http://www.admin-magazine.com/HPC/Articles/Getting-Ready-for-the-New-Ceph-Object-Store&quot; target=_blank&gt;Ceph Jewel&lt;/a&gt;: from Admin HPC Magazine; Jewel, with CephFS as a file system, is ready for prime time.
  &lt;li&gt;&lt;a href=&quot;http://arstechnica.com/civis/viewtopic.php?f=21&amp;t=1258949&quot; target=_blank&gt;Scale Out Storage Comparisons&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;http://ceph-users.ceph.narkive.com/xOfjm2ol/ssd-cache-tier-rbd-cache-filesystem-corruption&quot; target=_blank&gt;Use forward mode instead of writeback&lt;/a&gt;?
  &lt;li&gt;&lt;a href=&quot;http://www.slideshare.net/LarryCover/ceph-open-source-storage-software-optimizations-on-intel-architecture-for-cloud-workloads&quot; target=_blank&gt;Open Source Storage Software Optimizations on Intel® Architecture for Cloud Workloads &lt;/a&gt; on SlideShare
  &lt;li&gt;&lt;a href=&quot;https://forum.proxmox.com/threads/ceph-backup.19341/&quot; target=_blank&gt;Ceph Backup&lt;/a&gt;: blowing out the cache when backing up
  &lt;li&gt;&lt;a href=&quot;https://forum.proxmox.com/threads/flashcache-vs-cache-tiering-in-ceph.26050/&quot; target=_blank&gt;Flashcache vs Cache Tiering in Ceph&lt;/a&gt;: hints and don&#039;ts
  &lt;li&gt;&lt;a href=&quot;https://forum.proxmox.com/threads/tweaking-vzdump-with-ceph-backend.20692/&quot; target=_blank&gt;Tweaking VZDump with Ceph Backend&lt;/a&gt;:  might be solved by now
  &lt;li&gt;&lt;a href=&quot;https://01.org/node/3916&quot; target=_blank&gt;Performance Portal for Ceph&lt;/a&gt;: by Intel
  &lt;li&gt;&lt;a href=&quot;https://github.com/ceph/ceph-ansible&quot; target=_blank&gt;Ansible playbooks for Ceph&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.sebastien-han.fr/blog/2016/05/18/Using-Ubuntu-Planning-on-using-Ceph-Jewel-Here-what-you-should-consider/&quot; target=_blank&gt;Planning on using Ceph Jewel? Here what you should consider&lt;/a&gt;: lots of useful stuff on Sébastien Han&#039;s site
  &lt;li&gt;&lt;a href=&quot;http://www.virtualtothecore.com/en/adventures-ceph-storage-part-1-introduction/&quot; target=_blank&gt;My adventures with Ceph Storage. Part 1: Introduction&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.linkedin.com/topic/ceph&quot; target=_blank&gt;LinkedIn: Ceph&lt;/a&gt;
  &lt;li&gt;2020/04/06 &lt;a href=&quot;https://pages.croit.io/croit/v2002/getting-started/installation.html&quot; target=_blank&gt;croit Ceph Storage Management Software&lt;/a&gt;
  &lt;/ul&gt; 
    </content:encoded>

    <pubDate>Tue, 25 Oct 2016 16:17:37 +0000</pubDate>
    <guid isPermaLink="false">https://blog.raymond.burkholder.net/index.php?/archives/693-guid.html</guid>
    
</item>

</channel>
</rss>
