Packages from Debian Stretch used with libvirt in various capacities:
- qemu-kvm - main package
- libvirt-daemon-system - runs the daemon; unfortunately it also pulls in iptables, whereas I currently use nftables
- numad - NUMA affinity management daemon
- bridge-utils - to be used with Free Range Routing in an EVPN capacity
- lxc - containers
- ctop - container statistics
- Optional extras:
- python-libvirt - python library
- qemu-utils - some image commands
- virtinst - command-line provisioning tools (virt-install and friends)
- virt-top - top-like monitor for guests
- qemu-guest-agent - for install in a guest
- virt-manager - graphical interface
- virt-viewer - graphical console client
- libvirt-dev - development headers for building custom C code against libvirt
- libvirt-sanlock - sanlock-based lock manager plugin, for use with the locking library
- libvirt-wireshark - Wireshark dissector for troubleshooting the libvirt wire format
- For related activities:
- packer
- snapper
- libguestfs-tools
- libguestfs-rescue
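The core set above can be installed in one pass. A sketch, assuming the Debian Stretch repositories; the command is printed rather than executed here, since installation needs root:

```shell
# Core package set from the list above; append the optional extras as needed.
pkgs="qemu-kvm libvirt-daemon-system numad bridge-utils lxc ctop"
# Print the install command for review; run it as root to actually install.
echo "apt-get install --no-install-recommends $pkgs"
```

Note that `--no-install-recommends` helps avoid pulling in extra packages (such as the iptables dependency noted above being pulled in at all is a hard dependency, not a recommend, so it will still arrive).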
With Sheepdog installed and the cluster formatted, a block store usable by libvirt/qemu/kvm needs to be created. This means creating a pool. Here is the content of pool.xml:
<pool type="sheepdog">
  <name>pool1</name>
  <source>
    <name>pool1</name>
    <host name='127.0.0.1' port='7000'/>
  </source>
</pool>
Only the 'name' element seems to be used; it's not clear what the 'source'/'name' element is for. This command, run on each of the three nodes, creates a non-persistent (transient) pool:
# virsh pool-create pool.xml
Use the following instead to create a persistent, auto-starting pool:
# virsh pool-define pool.xml
# virsh pool-start pool1
# virsh pool-autostart pool1
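The define/start/autostart sequence can be wrapped in a small script. A sketch, using the pool.xml content shown above; the virsh calls are guarded so the script degrades to only writing the file when libvirt is absent:

```shell
#!/bin/sh
set -e

# Write the pool definition shown above.
cat > pool.xml <<'EOF'
<pool type="sheepdog">
  <name>pool1</name>
  <source>
    <name>pool1</name>
    <host name='127.0.0.1' port='7000'/>
  </source>
</pool>
EOF

# Define, start, and autostart the pool (skipped when virsh is not installed).
if command -v virsh >/dev/null 2>&1; then
    virsh pool-define pool.xml &&
    virsh pool-start pool1 &&
    virsh pool-autostart pool1 || echo "virsh step failed; is libvirtd running?"
fi
echo "pool.xml written"
```

Run once per node, as with the manual commands.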
Which results with the following:
# virsh pool-list
 Name                 State      Autostart
-------------------------------------------
 pool1                active     yes
The reported pool size doesn't seem to reflect the Sheepdog pool size, but that appears to be a red herring: once a volume gets associated, the numbers look correct.
# virsh pool-info pool1
Name:           pool1
UUID:           4bf5a447-39c5-491e-9d05-9f4c4b68ff16
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       43.96 GiB
Allocation:     13.50 KiB
Available:      43.96 GiB
Debugging for virsh commands can be turned on with:
export LIBVIRT_DEBUG=1
export LIBVIRT_LOG_OUTPUTS="1:file:virsh.log"
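With both variables exported, every virsh invocation in that shell appends debug traces to ./virsh.log. A sketch; the virsh call is guarded in case libvirt isn't installed:

```shell
export LIBVIRT_DEBUG=1
export LIBVIRT_LOG_OUTPUTS="1:file:virsh.log"
# Any virsh run from this shell now writes debug traces to ./virsh.log
command -v virsh >/dev/null 2>&1 && virsh pool-list || true
echo "logging to: $LIBVIRT_LOG_OUTPUTS"
```

Unset both variables (or start a fresh shell) when done, or the log file keeps growing.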
To create a volume, here is an example parameter file (vol1.xml) (as before, it seems only the name element has any importance):
<volume>
  <name>vol1a</name>
  <key>sheep/vol1</key>
  <source>
  </source>
  <capacity unit='bytes'>1000000000</capacity>
  <allocation unit='bytes'>1000000000</allocation>
  <target>
    <path>sheepdog:vol1</path>
    <format type='unknown'/>
    <permissions>
      <mode>00</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</volume>
This creates the volume and associates it with the previously defined pool:
# virsh vol-create pool1 vol1.xml
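Since only the name element appears to matter, the volume definition can be trimmed down and the creation scripted. A sketch, assuming pool1 is already defined as above; I use the name vol1 to match the vol-info output below, and guard the virsh call so the file is still produced without libvirt:

```shell
#!/bin/sh
set -e

# Trimmed volume definition: only the elements that appear to matter.
cat > vol1.xml <<'EOF'
<volume>
  <name>vol1</name>
  <capacity unit='bytes'>1000000000</capacity>
  <allocation unit='bytes'>1000000000</allocation>
</volume>
EOF

# Create the volume in the previously defined pool (skipped without virsh).
if command -v virsh >/dev/null 2>&1; then
    virsh vol-create pool1 vol1.xml || echo "vol-create failed; is pool1 defined?"
fi
echo "vol1.xml written"
```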
Which then provides us with:
# virsh vol-info vol1
Name:           vol1
Type:           network
Capacity:       953.67 MiB
Allocation:     0.00 B
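The capacity shown is simply the requested 1,000,000,000 bytes rendered in binary units:

```shell
# 1,000,000,000 bytes / 1024^2 bytes-per-MiB ~= 953.67 MiB, matching vol-info
awk 'BEGIN { printf "%.2f\n", 1000000000 / (1024 * 1024) }'
```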
- Some documents:
- virsh command reference
- virtualization administration guide