<?xml version="1.0" encoding="utf-8" ?>

<rss version="2.0" 
   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
   xmlns:admin="http://webns.net/mvcb/"
   xmlns:dc="http://purl.org/dc/elements/1.1/"
   xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
   xmlns:wfw="http://wellformedweb.org/CommentAPI/"
   xmlns:content="http://purl.org/rss/1.0/modules/content/"
   >
<channel>
    
    <title>Raymond P. Burkholder - Things I Do - Proxmox</title>
    <link>http://blog.raymond.burkholder.net/</link>
    <description>In And Around Technology and The Arts</description>
    <dc:language>en</dc:language>
    <generator>Serendipity 1.7.2 - http://www.s9y.org/</generator>
    <pubDate>Sun, 12 Apr 2026 03:45:08 GMT</pubDate>

    <image>
        <url>http://blog.raymond.burkholder.net/templates/bulletproof/img/s9y_banner_small.png</url>
        <title>RSS: Raymond P. Burkholder - Things I Do - Proxmox - In And Around Technology and The Arts</title>
        <link>http://blog.raymond.burkholder.net/</link>
        <width>100</width>
        <height>21</height>
    </image>

<item>
    <title>LXC Fresh Container Construction From Scratch for Proxmox</title>
    <link>http://blog.raymond.burkholder.net/index.php?/archives/1335-LXC-Fresh-Container-Construction-From-Scratch-for-Proxmox.html</link>
            <category>Containers</category>
            <category>Debian</category>
            <category>LXC</category>
            <category>Proxmox</category>
    
    <comments>http://blog.raymond.burkholder.net/index.php?/archives/1335-LXC-Fresh-Container-Construction-From-Scratch-for-Proxmox.html#comments</comments>
    <wfw:comment>http://blog.raymond.burkholder.net/wfwcomment.php?cid=1335</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1335</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;There are many articles available which discuss customizing a pre-existing Proxmox container template.  Few, if any, discuss constructing an LXC container from scratch.  Maybe that is because, fundamentally, a container template is just the rootfs as a tarball, so building one is quite easy:

&lt;ul&gt;
  &lt;li&gt;Build a Linux based virtual machine; I use Debian&#039;s current release
  &lt;li&gt;Install LXC and its template package
  &lt;li&gt;Construct and initialize an LXC container
  &lt;li&gt;Shut it down and zip it up
  &lt;li&gt;Copy it over to the ProxMox template directory
  &lt;/ul&gt;

&lt;p&gt;The details:

&lt;blockquote&gt;&lt;pre&gt;
# build the linux vm - details not relevant here
# ssh into the vm, or start a command line
# install basic packages

sudo apt install --no-install-recommends lxc lxc-templates xz-utils bridge-utils wget debootstrap rsync

# basic container templates are in:
#   /usr/share/lxc/templates/ 
# for debian as well as other distributions

# create an lxc container, providing a list of any additional packages

lxc-create --template debian --name trixie-template -- --release trixie --packages iputils-ping,vim-tiny

# start and attach to the container
lxc-start trixie-template
lxc-attach trixie-template

# prepare for generating template
apt clean
apt purge

# Remove SSH host keys to ensure unique keys for each clone:
rm /etc/ssh/ssh_host_*

# Empty the machine ID file:
truncate -s 0 /etc/machine-id

# clear history
unset HISTFILE
# truncate history
history -c
&gt; ~/.bash_history
# the following has a space in front to prevent inclusion in the history
 shutdown -h now

# the shutdown returns to the virtual machine&#039;s prompt
# compress the directory structure

cd /var/lib/lxc/trixie-template/

# remove /dev files as they can&#039;t be created in an unprivileged container
# an example error message if not removed:
#   tar: ./rootfs/dev/urandom: Cannot mknod: Operation not permitted
# construction of a new container will re-create the directory and files

rm ./rootfs/dev/ptmx
rm ./rootfs/dev/zero
rm ./rootfs/dev/tty3
rm ./rootfs/dev/urandom
rm ./rootfs/dev/null
rm ./rootfs/dev/tty
rm ./rootfs/dev/console
rm ./rootfs/dev/tty4
rm ./rootfs/dev/tty2
rm ./rootfs/dev/random
rm ./rootfs/dev/tty1
rm ./rootfs/dev/full

# cd into rootfs and zip the container

cd rootfs
tar --xz --acls --numeric-owner -cf /var/local/trixie-13-3-template.tar.xz ./

# the xz file can be copied over to proxmox and placed into
# /var/lib/pve/local-btrfs/template/cache/
# for use as a template for container creation
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;During the first use of lxc-create, packages are downloaded and installed to build the container.
The packages and the resulting installation are cached for faster subsequent builds of the same container type.

&lt;p&gt;If the cache becomes stale, it can be rebuilt by using --flush-cache in a manner similar to:

&lt;blockquote&gt;&lt;pre&gt;
lxc-create --template debian --name trixie-template -- --release trixie --flush-cache --packages iputils-ping,vim-tiny,less,python-minimal
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;An existing cache can be updated with something like:

&lt;blockquote&gt;&lt;pre&gt;
sudo chroot /var/cache/lxc/debian/rootfs-trixie-amd64
apt-get update
apt-get dist-upgrade
apt-get clean
exit
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;courtesy of &lt;a href=&quot;https://www.tomechangosubanana.com/2015/updating-lxc-imagecontainer-caches/&quot; target=_blank&gt;Updating lxc image/container caches&lt;/a&gt;

&lt;p&gt;One other note, there are two package candidates for installing the &lt;a href=&quot;https://unix.stackexchange.com/questions/400351/what-are-the-differences-between-iputils-ping-and-inetutils-ping&quot; target=_blank&gt;ping utility&lt;/a&gt;:

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://packages.debian.org/trixie/iputils-ping&quot; target=_blank&gt;iputils-ping&lt;/a&gt; - native Linux ping, preferred for Debian/Linux
  &lt;li&gt;&lt;a href=&quot;https://packages.debian.org/trixie/inetutils-ping&quot; target=_blank&gt;inetutils-ping&lt;/a&gt; - the general GNU version, used on a variety of POSIX systems; less preferred
  &lt;/ul&gt;

&lt;p&gt;Some fix-ups in the process:

&lt;ul&gt;
  &lt;li&gt;apt-get install less
  &lt;li&gt;dpkg-reconfigure locales
  &lt;li&gt;useradd user
  &lt;/ul&gt;

 
    </content:encoded>

    <pubDate>Fri, 27 Feb 2026 21:03:19 +0000</pubDate>
    <guid isPermaLink="false">http://blog.raymond.burkholder.net/index.php?/archives/1335-guid.html</guid>
    
</item>
<item>
    <title>Docker Installation In LXC on ProxMox</title>
    <link>http://blog.raymond.burkholder.net/index.php?/archives/1346-Docker-Installation-In-LXC-on-ProxMox.html</link>
            <category>Docker</category>
            <category>LXC</category>
            <category>Proxmox</category>
    
    <comments>http://blog.raymond.burkholder.net/index.php?/archives/1346-Docker-Installation-In-LXC-on-ProxMox.html#comments</comments>
    <wfw:comment>http://blog.raymond.burkholder.net/wfwcomment.php?cid=1346</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1346</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;First of all, the obligatory caveat from 2023: &lt;a href=&quot;https://forum.proxmox.com/threads/updating-proxmox-breaks-docker-lxc.126720/?ref=benheater.com#post-553701&quot; target=_blank&gt;where Proxmox developers discourage running Docker in LXC&lt;/a&gt;.  Upgrades to Proxmox may break &#039;something&#039;, which will require remediation of the containers.  The relationship between Proxmox, LXC and Docker is brittle.

&lt;p&gt;I do totally agree not to install Docker directly on the Proxmox host, as Docker will conflict with many networking and functional operations.

&lt;p&gt;However, the combination of Docker in LXC is just too enticing.  What other mechanism can compartmentalize applications and provide GPU resources to each of them, particularly when an application is packaged only as a Docker container, short of building a native LXC container for it?  Putting LXC and Docker into a VM seems a bit &#039;heavy&#039; just for the sake of softening some brittleness, and all the same management has to take place within the VM.

&lt;p&gt;The key benefit is that devices such as one or more GPUs can be passed through to multiple LXC containers, plus any nested Docker containers.  Otherwise, when the GPU or PCIe device is passed through to a VM, as far as I know, it has to be dedicated to that VM.  I&#039;ve read that the devices cannot be shared between a VM and LXC containers due to configuration differences between VM pass-through and LXC pass-through.

&lt;p&gt;Given the caveat, I&#039;ll see if I can make this work.  Not so easy.  Trying to run
&lt;blockquote&gt;&lt;pre&gt;
docker run --rm hello-world
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Yields an error:
&lt;blockquote&gt;&lt;pre&gt;
docker: Error response from daemon: failed to mount /tmp/containerd-mount2030888385: 
mount source: &quot;overlay&quot;, target: &quot;/tmp/containerd-mount2030888385&quot;, 
fstype: overlay, flags: 0, 
data: &quot;
  workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/work,
  upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs,
  lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/2/fs,userxattr&quot;, 
  err: permission denied
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;With an associated apparmor error in Proxmox:
&lt;blockquote&gt;&lt;pre&gt;
audit: type=1400 audit(1774803476.655:145): 
  apparmor=&quot;DENIED&quot; operation=&quot;mount&quot; class=&quot;mount&quot; info=&quot;failed perms check&quot; error=-13 
  profile=&quot;lxc-131_&lt;/var/lib/lxc&gt;&quot; 
  name=&quot;/tmp/containerd-mount2030888385/&quot; 
  pid=1480790 comm=&quot;dockerd&quot; fstype=&quot;overlay&quot; srcname=&quot;overlay&quot;
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;The simple solution is to set &lt;b&gt;nesting=1&lt;/b&gt; in the proxmox lxc options.
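&lt;p&gt;For an existing container, nesting can also be enabled from the Proxmox command line; a minimal sketch, assuming a container id of 131 as in the apparmor log above:

```shell
# enable the nesting feature on container 131 (substitute your own vmid)
pct set 131 --features nesting=1
# restart so the option takes effect
pct stop 131
pct start 131
```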

&lt;p&gt;The next hurdle is that it may take a couple of minutes for Docker to become usable when the container starts up.  If so, you may see this:
&lt;blockquote&gt;&lt;pre&gt;
&gt; ps aux
root      41  0.0  0.0   2680  1808 ?    Ss   20:09   0:00 /bin/sh /usr/lib/ifupdown/wait-online.sh
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;If so, this can be disabled:
&lt;blockquote&gt;&lt;pre&gt;
systemctl disable ifupdown-wait-online.service
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;In addition, systemd-networkd-wait-online may be waiting for an interface it doesn&#039;t manage, which will cause a startup delay of several minutes.  Use the following to enable debug logging:
&lt;blockquote&gt;&lt;pre&gt;
systemctl edit systemd-networkd-wait-online.service

[Service]
Environment=SYSTEMD_LOG_LEVEL=debug
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;In my case, I then saw something like:
&lt;blockquote&gt;&lt;pre&gt;
root@frigate01:~# systemctl status systemd-networkd-wait-online.service
● systemd-networkd-wait-online.service - Wait for Network to be Configured

Mar 29 20:38:44 frigate01 systemd-networkd-wait-online[97]: lo: link is ignored
Mar 29 20:38:44 frigate01 systemd-networkd-wait-online[97]: vlan60: link is not managed by networkd.
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;I had used a non-standard interface name.  I resolved this by updating the override with the following:
&lt;blockquote&gt;&lt;pre&gt;
&gt; systemctl edit systemd-networkd-wait-online.service

[Service]
ExecStart=
ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --interface=vlan60
#Environment=SYSTEMD_LOG_LEVEL=debug
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;The empty ExecStart line clears the original command parameters.
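&lt;p&gt;To confirm the override took effect, the merged unit definition can be inspected; a sketch (systemctl cat shows the drop-in created by systemctl edit alongside the original unit file):

```shell
# display the unit file plus any drop-in overrides; the override from
# 'systemctl edit' lives under
#   /etc/systemd/system/systemd-networkd-wait-online.service.d/
systemctl cat systemd-networkd-wait-online.service
```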

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.baeldung.com/linux/systemd-networkd-wait-online-service-timeout-solution&quot; target=_blank&gt;How to Fix systemd-networkd-wait-online Service Timing Out During Boot&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.man7.org/linux/man-pages/man8/systemd-networkd-wait-online.8.html&quot; target=_blank&gt;systemd-networkd-wait-online.service(8) — Linux manual page&lt;/a&gt;
  &lt;/ul&gt;

&lt;p&gt;Some Docker commands to verify the setup (note the hello-world image contains no shell, so an interactive session needs a fuller image such as debian):
&lt;blockquote&gt;&lt;pre&gt;
docker run --rm hello-world
docker run --rm -it debian bash
&lt;/pre&gt;&lt;/blockquote&gt;

 
    </content:encoded>

    <pubDate>Sun, 29 Mar 2026 17:26:11 +0000</pubDate>
    <guid isPermaLink="false">http://blog.raymond.burkholder.net/index.php?/archives/1346-guid.html</guid>
    
</item>
<item>
    <title>NVidia GPU Passthrough to ProxMox LXC Container</title>
    <link>http://blog.raymond.burkholder.net/index.php?/archives/1343-NVidia-GPU-Passthrough-to-ProxMox-LXC-Container.html</link>
            <category>Proxmox</category>
    
    <comments>http://blog.raymond.burkholder.net/index.php?/archives/1343-NVidia-GPU-Passthrough-to-ProxMox-LXC-Container.html#comments</comments>
    <wfw:comment>http://blog.raymond.burkholder.net/wfwcomment.php?cid=1343</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1343</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;&lt;a href=&quot;https://www.virtualizationhowto.com/2025/05/how-to-enable-gpu-passthrough-to-lxc-containers-in-proxmox/&quot; target=_blank&gt;How to Enable GPU Passthrough to LXC Containers in Proxmox&lt;/a&gt; indicates that passing a GPU through to both an LXC container and a Virtual Machine is not possible, as the two types of configuration conflict with each other.

&lt;p&gt;As my own preference is to run whatever possible in LXC containers, I&#039;ll summarize the configuration I used, which is an amalgamation of configurations from several sites.
&lt;p&gt;My current installation is ProxMox v9.1.6 with:

&lt;ul&gt;
  &lt;li&gt;ProArt Z890-CREATOR WIFI
  &lt;li&gt;Intel(R) Core(TM) Ultra 9 285K
  &lt;li&gt;Corsair CMP64GX5M2X6600C32 (128G  4400 MT/s) - ECC would have been nice
  &lt;li&gt;NVIDIA Corporation AD103 [GeForce RTX 4070] (rev a1)
  &lt;/ul&gt;

&lt;p&gt;In BIOS/UEFI, enable these:
&lt;ul&gt;
  &lt;li&gt;VT-d / IOMMU
  &lt;li&gt;Above 4G Decoding
  &lt;li&gt;PCIe Native Power Management (if available)
  &lt;/ul&gt;

&lt;p&gt;Proxmox kernel parameters:
&lt;blockquote&gt;&lt;pre&gt;
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT=&quot;quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction&quot;

update-grub
reboot
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;&lt;a href=&quot;https://www.kernel.org/doc/html/latest/driver-api/vfio.html&quot; target=_blank&gt;VFIO Binding&lt;/a&gt; - optional but recommended:
&lt;blockquote&gt;&lt;pre&gt;
# /etc/modprobe.d/vfio.conf
options vfio_iommu_type1 allow_unsafe_interrupts=1
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Obtain Linux drivers from &lt;a href=&quot;https://www.nvidia.com/en-us/drivers/&quot; target=_blank&gt;NVidia&lt;/a&gt;.  The CUDA toolkit is not required.  Only the drivers are required in ProxMox.  Toolkits and add-ons are added within the container.

&lt;p&gt;Install the drivers:
&lt;blockquote&gt;&lt;pre&gt;
apt install build-essential
apt install pve-headers-$(uname -r)
sh NVIDIA-Linux-x86_64-595.58.03.run
# note, use the open kernel module rather than the proprietary one
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Blacklist nouveau:
&lt;blockquote&gt;&lt;pre&gt;
cat &gt; /etc/modprobe.d/blacklist-nouveau.conf &lt;&lt; EOF
blacklist nouveau
options nouveau modeset=0
EOF
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Test that the card is accessible:
&lt;blockquote&gt;&lt;pre&gt;
nvidia-smi
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Enable &lt;a href=&quot;https://docs.nvidia.com/deploy/driver-persistence/data-persistence.html&quot; target=_blank&gt;Data Persistence&lt;/a&gt; to prevent the GPU from re-initializing with each use.
&lt;blockquote&gt;&lt;pre&gt;
nvidia-persistenced --persistence-mode
systemctl enable nvidia-persistenced
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Then restart:
&lt;blockquote&gt;&lt;pre&gt;
reboot
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Identify the nvidia devices requiring passthrough:
&lt;blockquote&gt;&lt;pre&gt;
root@host02:~# ls -al /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Mar 28 11:58 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Mar 28 11:58 /dev/nvidiactl
crw-rw-rw- 1 root root 505,   0 Mar 28 11:58 /dev/nvidia-uvm
crw-rw-rw- 1 root root 505,   1 Mar 28 11:58 /dev/nvidia-uvm-tools

/dev/nvidia-caps:
total 0
drwxr-xr-x  2 root root     80 Mar 28 11:58 .
drwxr-xr-x 21 root root   5060 Mar 28 11:58 ..
cr--------  1 root root 508, 1 Mar 28 11:58 nvidia-cap1
cr--r--r--  1 root root 508, 2 Mar 28 11:58 nvidia-cap2
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Note the numbers 195, 505 and 508 in this list (yours may be different).

&lt;p&gt;Construct a container, and prior to starting, place the following into /etc/pve/lxc/&amp;lt;vmid&amp;gt;.conf (based upon the device listing above):
&lt;blockquote&gt;&lt;pre&gt;
dev0: /dev/nvidia0
dev1: /dev/nvidiactl
dev2: /dev/nvidia-modeset
dev3: /dev/nvidia-uvm
dev4: /dev/nvidia-uvm-tools
dev5: /dev/nvidia-caps/nvidia-cap1
dev6: /dev/nvidia-caps/nvidia-cap2
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;These lines are optional in the config file; one site mentions them, but my container seems to work without them (they appear to be the older cgroup2-style passthrough rather than the device-oriented passthrough above):
&lt;blockquote&gt;&lt;pre&gt;
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 505:* rwm
lxc.cgroup2.devices.allow: c 508:* rwm
&lt;/pre&gt;&lt;/blockquote&gt;


&lt;p&gt;Start the container and push the driver file into the container:
&lt;blockquote&gt;&lt;pre&gt;
pct push &amp;lt;vmid&amp;gt; downloads/NVIDIA-Linux-x86_64-595.58.03.run /root/NVIDIA-Linux-x86_64-595.58.03.run
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;In the container, install the driver, minus the kernel module:
&lt;blockquote&gt;&lt;pre&gt;
apt install kmod
sh NVIDIA-Linux-x86_64-595.58.03.run --no-kernel-modules
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Run nvidia-smi in the container to confirm the card is reachable.

&lt;p&gt;Add nvtop at the host or the container level to chart live GPU utilization:
&lt;blockquote&gt;&lt;pre&gt;
apt install nvtop
&lt;/pre&gt;&lt;/blockquote&gt;


&lt;p&gt;Additional resources:
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.reddit.com/r/Proxmox/comments/1s629rq/complete_gpu_passthrough_guide_for_ai_workloads_t/&quot; target=_blank&gt;Complete GPU passthrough guide for AI workloads, avoid the mistakes I made so you don&#039;t have to &lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://forum.proxmox.com/threads/nvidia-drivers-instalation-proxmox-and-ct.156421/&quot; target=_blank&gt;[TUTORIAL] NVIDIA drivers instalation Proxmox and CT&lt;/a&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.virtualizationhowto.com/2025/05/how-to-enable-gpu-passthrough-to-lxc-containers-in-proxmox/&quot; target=_blank&gt;How to Enable GPU Passthrough to LXC Containers in Proxmox&lt;/a&gt; - contains ollama startup examples with OpenWebUI
  &lt;/ul&gt;

&lt;p&gt;Another test to run once PyTorch is installed:
&lt;blockquote&gt;&lt;pre&gt;
python -c &quot;import torch; print(torch.cuda.is_available())&quot;
&lt;/pre&gt;&lt;/blockquote&gt;
 
    </content:encoded>

    <pubDate>Sat, 28 Mar 2026 17:26:45 +0000</pubDate>
    <guid isPermaLink="false">http://blog.raymond.burkholder.net/index.php?/archives/1343-guid.html</guid>
    
</item>
<item>
    <title>Opening ProxMox .vv files with virt-viewer (Debian &amp; Firefox) </title>
    <link>http://blog.raymond.burkholder.net/index.php?/archives/1339-Opening-ProxMox-.vv-files-with-virt-viewer-Debian-Firefox.html</link>
            <category>Proxmox</category>
    
    <comments>http://blog.raymond.burkholder.net/index.php?/archives/1339-Opening-ProxMox-.vv-files-with-virt-viewer-Debian-Firefox.html#comments</comments>
    <wfw:comment>http://blog.raymond.burkholder.net/wfwcomment.php?cid=1339</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1339</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;If running Firefox on a Debian Linux machine, install virt-viewer:

&lt;blockquote&gt;&lt;pre&gt;
sudo apt install virt-viewer
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Ensure the VirtIO drivers and such have been installed in the virtual machine (&lt;a href=&quot;https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/?C=M;O=D&quot; target=_blank&gt;if Windows&lt;/a&gt;) in order to provide SPICE services.

&lt;p&gt;Then, in Firefox on your workstation:

&lt;ul&gt;
  &lt;li&gt;go into about:config and add the key &#039;network.protocol-handler.expose.virt-viewer&#039; as a boolean set to true
  &lt;li&gt;go into about:preferences and set &quot;What should Firefox do with other files&quot; to &quot;Ask whether to open or save files&quot;
  &lt;li&gt;in Proxmox, open a SPICE based console for a virtual machine, which downloads or runs a customized .vv file
  &lt;li&gt;Firefox will then ask to open the Virt-Viewer file with Remote Viewer - at this point, you can set it as the default viewer, and it will show up in the application preferences
  &lt;/ul&gt;
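&lt;p&gt;Outside of Firefox, the same association can be made at the desktop level with xdg-mime; a sketch, assuming the remote-viewer.desktop entry shipped by the virt-viewer package and the application/x-virt-viewer MIME type it registers for .vv files:

```shell
# make Remote Viewer the default handler for .vv files
xdg-mime default remote-viewer.desktop application/x-virt-viewer
# verify the association
xdg-mime query default application/x-virt-viewer
```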
 
    </content:encoded>

    <pubDate>Sat, 14 Mar 2026 22:54:52 +0000</pubDate>
    <guid isPermaLink="false">http://blog.raymond.burkholder.net/index.php?/archives/1339-guid.html</guid>
    
</item>
<item>
    <title>apparmor=&quot;DENIED&quot; operation=&quot;mount&quot; class=&quot;mount&quot; info=&quot;failed perms check&quot; error=-13 </title>
    <link>http://blog.raymond.burkholder.net/index.php?/archives/1338-apparmorDENIED-operationmount-classmount-infofailed-perms-check-error-13.html</link>
            <category>Debian</category>
            <category>LXC</category>
            <category>Proxmox</category>
    
    <comments>http://blog.raymond.burkholder.net/index.php?/archives/1338-apparmorDENIED-operationmount-classmount-infofailed-perms-check-error-13.html#comments</comments>
    <wfw:comment>http://blog.raymond.burkholder.net/wfwcomment.php?cid=1338</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1338</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;After following my own instructions for building my own LXC container template for ProxMox, this time using the sid release, when the container started the ProxMox logs would fill up with errors along the lines of:

&lt;blockquote&gt;&lt;pre&gt;
apparmor=&quot;DENIED&quot; operation=&quot;mount&quot; class=&quot;mount&quot; info=&quot;failed flags match&quot; error=-13 name=&quot;/run/credentials/systemd-journald.service/&quot; flags=&quot;rw, move&quot;
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;My Trixie template did not seem to produce these types of errors.  The LXC containers were created with the &#039;Unprivileged Container&#039; setting set to 1|yes.

&lt;p&gt;Instead of going the last resort brute force and ignorance route of using the following configuration (see &lt;a href=&quot;https://github.com/russmorefield/lxc-docker-fix&quot; target=_blank&gt;Fixing net.ipv4.ip_unprivileged_port_start and AppArmor Docker Errors in a Proxmox LXC&lt;/a&gt; for some background):

&lt;blockquote&gt;&lt;pre&gt;
lxc.apparmor.profile: unconfined
features: keyctl=1,nesting=1
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;I took a more nuanced/detailed approach.  &lt;a href=&quot;https://bobcares.com/blog/apparmor-denied-operation-mount-info-failed-flags-match-error-13/&quot; target=_blank&gt;AppArmor Denied Operation mount info failed flags match Error 13&lt;/a&gt; provided a starting point for developing a solution.

&lt;p&gt;After incrementally adding rules as new Apparmor DENIED statements occurred, this is the rule set which seems to resolve the errors.  Once the container is created, these are the rules I add to the end of /etc/pve/lxc/&amp;lt;vmid&amp;gt;.conf:

&lt;blockquote&gt;&lt;pre&gt;
lxc.apparmor.raw: mount options=(rw,move) -&gt; /run/credentials/{,**},
lxc.apparmor.raw: mount options=(ro, remount, noatime, bind) -&gt; /,
lxc.apparmor.raw: mount options=(ro, remount, bind) -&gt; /dev/,
lxc.apparmor.raw: mount options=(rw, move) -&gt; /dev/mqueue/,
lxc.apparmor.raw: mount options=(rw, move) -&gt; /tmp/,
lxc.apparmor.raw: mount options=(rw, move) -&gt; /run/systemd/mount-rootfs/proc/,
lxc.apparmor.raw: mount options=(ro, nosuid, nodev, noexec, remount, nosymfollow, bind) -&gt; /run/systemd/mount-rootfs/run/credentials/systemd-networkd.service/,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/sys/net/,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/uptime,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/slabinfo,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/meminfo,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/swaps,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/loadavg,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/cpuinfo,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/diskstats,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/stat,
lxc.apparmor.raw: userns create,
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Restart the container, and the errors should no longer occur.

&lt;p&gt;Don&#039;t try to place statements in /var/lib/lxc/&amp;lt;vmid&amp;gt;/config, as it is overwritten by ProxMox upon container startup; the rules from /etc/pve/lxc/&amp;lt;vmid&amp;gt;.conf are appended to that generated configuration.
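&lt;p&gt;To see what ProxMox actually generated, the runtime configuration can be inspected read-only; a sketch, with the container id as an assumption:

```shell
# hypothetical container id; substitute your own
vmid=131
# the generated config is rewritten at each container start, so treat
# it as read-only; the appended apparmor rules should appear near the end
grep apparmor /var/lib/lxc/$vmid/config
```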

&lt;p&gt;I used the following for a Trixie v13.3 container:

&lt;blockquote&gt;&lt;pre&gt;
lxc.apparmor.raw: mount fstype=ramfs -&gt; /dev/shm/,
lxc.apparmor.raw: mount options=(ro, nosuid, nodev, noexec, remount, nosymfollow, bind) -&gt; /dev/shm/,
lxc.apparmor.raw: mount options=(ro, remount, bind) -&gt; /dev/,
lxc.apparmor.raw: mount options=(rw, move) -&gt; /dev/mqueue/,
lxc.apparmor.raw: mount options=(rw, move) -&gt; /run/lock/,
lxc.apparmor.raw: mount options=(rw, move) -&gt; /tmp/,
lxc.apparmor.raw: mount options=(ro, remount, noatime, bind) -&gt; /,
lxc.apparmor.raw: mount options=(ro, nosuid, nodev, noexec, remount, nosymfollow, bind) -&gt; /run/systemd/mount-rootfs/run/credentials/systemd-networkd.service/,
lxc.apparmor.raw: userns create,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec) -&gt; /run/systemd/namespace-{,**},
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/sys/net/,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/uptime,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/slabinfo,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/meminfo,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/swaps,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/loadavg,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/cpuinfo,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/diskstats,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/stat,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec) -&gt; /run/systemd/unit-root/proc/,
lxc.apparmor.raw: mount options=(ro, nosuid, nodev, noexec) -&gt; /sys/kernel/config/,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec) -&gt; /sys/kernel/config/,
&lt;/pre&gt;&lt;/blockquote&gt;

 
    </content:encoded>

    <pubDate>Sat, 28 Feb 2026 23:51:54 +0000</pubDate>
    <guid isPermaLink="false">http://blog.raymond.burkholder.net/index.php?/archives/1338-guid.html</guid>
    
</item>
<item>
    <title>Sample Proxmox command to build LXC container from Template</title>
    <link>http://blog.raymond.burkholder.net/index.php?/archives/1336-Sample-Proxmox-command-to-build-LXC-container-from-Template.html</link>
            <category>Proxmox</category>
    
    <comments>http://blog.raymond.burkholder.net/index.php?/archives/1336-Sample-Proxmox-command-to-build-LXC-container-from-Template.html#comments</comments>
    <wfw:comment>http://blog.raymond.burkholder.net/wfwcomment.php?cid=1336</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1336</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;blockquote&gt;&lt;pre&gt;
pct_id=101
pct_name=test01
pct create $pct_id /var/lib/pve/local-btrfs/template/cache/trixie-13-3-template.tar.xz  \
  -hostname $pct_name \
  -description &#039;demo build&#039; \
  -onboot 1 \
  -startup up=3 \
  -ostype debian \
  -arch amd64 \
  -cores 2 \
  -memory 1024 \
  -nameserver 10.10.10.10 -searchdomain &#039;example.com&#039; \
  -net0 name=vlan30,bridge=vmbr1,ip=dhcp,tag=30,type=veth \
  -rootfs local-btrfs:8,mountoptions=&quot;noatime;discard&quot; \
  -swap 512
&lt;/pre&gt;&lt;/blockquote&gt; 
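&lt;p&gt;Once created, the container can be started and entered from the same shell; a short sketch using the variables set above:

```shell
# start the freshly created container and open a shell inside it
pct start $pct_id
pct enter $pct_id
# or just check its state
pct status $pct_id
```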
    </content:encoded>

    <pubDate>Sat, 28 Feb 2026 04:40:48 +0000</pubDate>
    <guid isPermaLink="false">http://blog.raymond.burkholder.net/index.php?/archives/1336-guid.html</guid>
    
</item>

</channel>
</rss>
