<?xml version="1.0" encoding="utf-8" ?>

<rss version="2.0" 
   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
   xmlns:admin="http://webns.net/mvcb/"
   xmlns:dc="http://purl.org/dc/elements/1.1/"
   xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
   xmlns:wfw="http://wellformedweb.org/CommentAPI/"
   xmlns:content="http://purl.org/rss/1.0/modules/content/"
   >
<channel>
    
    <title>Raymond P. Burkholder - Things I Do</title>
    <link>http://blog.raymond.burkholder.net/</link>
    <description>In And Around Technology and The Arts</description>
    <dc:language>en</dc:language>
    <generator>Serendipity 1.7.2 - http://www.s9y.org/</generator>
    <pubDate>Tue, 03 Mar 2026 02:43:19 GMT</pubDate>

    <image>
        <url>http://blog.raymond.burkholder.net/templates/bulletproof/img/s9y_banner_small.png</url>
        <title>RSS: Raymond P. Burkholder - Things I Do - In And Around Technology and The Arts</title>
        <link>http://blog.raymond.burkholder.net/</link>
        <width>100</width>
        <height>21</height>
    </image>

<item>
    <title>apparmor=&quot;DENIED&quot; operation=&quot;mount&quot; class=&quot;mount&quot; info=&quot;failed perms check&quot; error=-13 </title>
    <link>http://blog.raymond.burkholder.net/index.php?/archives/1338-apparmorDENIED-operationmount-classmount-infofailed-perms-check-error-13.html</link>
            <category>Debian</category>
            <category>LXC</category>
            <category>Proxmox</category>
    
    <comments>http://blog.raymond.burkholder.net/index.php?/archives/1338-apparmorDENIED-operationmount-classmount-infofailed-perms-check-error-13.html#comments</comments>
    <wfw:comment>http://blog.raymond.burkholder.net/wfwcomment.php?cid=1338</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1338</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;After following my own instructions for building an LXC container template for ProxMox using the SID release, when the container started, the ProxMox logs would fill up with errors along the lines of:

&lt;blockquote&gt;&lt;pre&gt;
apparmor=&quot;DENIED&quot; operation=&quot;mount&quot; class=&quot;mount&quot; info=&quot;failed flags match&quot; error=-13 name=&quot;/run/credentials/systemd-journald.service/&quot; flags=&quot;rw, move&quot;
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;My Trixie template did not seem to produce these types of errors.  LXC containers were created with the &#039;Unprivileged Container&#039; setting set to 1|yes.

&lt;p&gt;Instead of going the last-resort, brute-force-and-ignorance route of using the following configuration (see &lt;a href=&quot;https://github.com/russmorefield/lxc-docker-fix&quot; target=_blank&gt;Fixing net.ipv4.ip_unprivileged_port_start and AppArmor Docker Errors in a Proxmox LXC&lt;/a&gt; for some background):

&lt;blockquote&gt;&lt;pre&gt;
lxc.apparmor.profile: unconfined
features: keyctl=1,nesting=1
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;I took a more nuanced/detailed approach.  &lt;a href=&quot;https://bobcares.com/blog/apparmor-denied-operation-mount-info-failed-flags-match-error-13/&quot; target=_blank&gt;AppArmor Denied Operation mount info failed flags match Error 13&lt;/a&gt; provided a starting point for developing a solution.

&lt;p&gt;After incrementally adding rules as new AppArmor DENIED statements occurred, this is the rule set which seems to resolve the errors.  Once the container is created, these are the rules I add to the end of /etc/pve/lxc/&amp;lt;vmid&amp;gt;.conf:

&lt;blockquote&gt;&lt;pre&gt;
lxc.apparmor.raw: mount options=(rw,move) -&gt; /run/credentials/{,**},
lxc.apparmor.raw: mount options=(ro, remount, noatime, bind) -&gt; /,
lxc.apparmor.raw: mount options=(ro, remount, bind) -&gt; /dev/,
lxc.apparmor.raw: mount options=(rw, move) -&gt; /dev/mqueue/,
lxc.apparmor.raw: mount options=(rw, move) -&gt; /tmp/,
lxc.apparmor.raw: mount options=(rw, move) -&gt; /run/systemd/mount-rootfs/proc/,
lxc.apparmor.raw: mount options=(ro, nosuid, nodev, noexec, remount, nosymfollow, bind) -&gt; /run/systemd/mount-rootfs/run/credentials/systemd-networkd.service/,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/sys/net/,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/uptime,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/slabinfo,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/meminfo,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/swaps,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/loadavg,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/cpuinfo,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/diskstats,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/stat,
lxc.apparmor.raw: userns create,
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;Restart the container, and the errors should no longer occur.
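&lt;p&gt;As a side note, the DENIED log lines can be turned into lxc.apparmor.raw rules fairly mechanically.  A minimal sketch, assuming the journal line carries flags=&quot;...&quot; and name=&quot;...&quot; fields as in the example above (the sed extraction is my own, not part of any LXC tooling):

```shell
# Sketch: turn an AppArmor "DENIED" mount line into an lxc.apparmor.raw rule.
# The sample line and the sed field extraction are assumptions based on the
# log format shown above.
line='apparmor="DENIED" operation="mount" class="mount" info="failed flags match" error=-13 name="/run/credentials/systemd-journald.service/" flags="rw, move"'
flags=$(printf '%s' "$line" | sed -n 's/.*flags="\([^"]*\)".*/\1/p')
name=$(printf '%s' "$line" | sed -n 's/.*name="\([^"]*\)".*/\1/p')
printf 'lxc.apparmor.raw: mount options=(%s) -> %s,\n' "$flags" "$name"
```

Each new DENIED line can be run through the same extraction, then appended to the container configuration as above.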

&lt;p&gt;Don&#039;t try to place the statements in /var/lib/lxc/&amp;lt;vmid&amp;gt;/config, as it is over-written by ProxMox upon container startup; the rules from /etc/pve/lxc/&amp;lt;vmid&amp;gt;.conf are appended to that configuration at that time.

&lt;p&gt;I used the following for a Trixie v13.3 container:

&lt;blockquote&gt;&lt;pre&gt;
lxc.apparmor.raw: mount fstype=ramfs -&gt; /dev/shm/,
lxc.apparmor.raw: mount options=(ro, nosuid, nodev, noexec, remount, nosymfollow, bind) -&gt; /dev/shm/,
lxc.apparmor.raw: mount options=(ro, remount, bind) -&gt; /dev/,
lxc.apparmor.raw: mount options=(rw, move) -&gt; /dev/mqueue/,
lxc.apparmor.raw: mount options=(rw, move) -&gt; /run/lock/,
lxc.apparmor.raw: mount options=(rw, move) -&gt; /tmp/,
lxc.apparmor.raw: mount options=(ro, remount, noatime, bind) -&gt; /,
lxc.apparmor.raw: mount options=(ro, nosuid, nodev, noexec, remount, nosymfollow, bind) -&gt; /run/systemd/mount-rootfs/run/credentials/systemd-networkd.service/,
lxc.apparmor.raw: userns create,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec) -&gt; /run/systemd/namespace-{,**},
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/sys/net/,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/uptime,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/slabinfo,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/meminfo,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/swaps,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/loadavg,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/cpuinfo,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/diskstats,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec, remount, bind) -&gt; /run/systemd/mount-rootfs/proc/stat,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec) -&gt; /run/systemd/unit-root/proc/,
lxc.apparmor.raw: mount options=(ro, nosuid, nodev, noexec) -&gt; /sys/kernel/config/,
lxc.apparmor.raw: mount options=(rw, nosuid, nodev, noexec) -&gt; /sys/kernel/config/,
&lt;/pre&gt;&lt;/blockquote&gt;

 
    </content:encoded>

    <pubDate>Sat, 28 Feb 2026 23:51:54 +0000</pubDate>
    <guid isPermaLink="false">http://blog.raymond.burkholder.net/index.php?/archives/1338-guid.html</guid>
    
</item>
<item>
    <title>Python Virtual Environment</title>
    <link>http://blog.raymond.burkholder.net/index.php?/archives/1337-Python-Virtual-Environment.html</link>
            <category>Ansible</category>
            <category>Python</category>
    
    <comments>http://blog.raymond.burkholder.net/index.php?/archives/1337-Python-Virtual-Environment.html#comments</comments>
    <wfw:comment>http://blog.raymond.burkholder.net/wfwcomment.php?cid=1337</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1337</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;&lt;a href=&quot;https://codesolid.com/pip-vs-pipenv-which-is-better-and-which-to-learn-first/&quot; target=_blank&gt;Pip vs Pipenv: Which is better and which to learn first&lt;/a&gt; compares &lt;a href=&quot;https://packages.debian.org/trixie/pipenv&quot; target=_blank&gt;pipenv&lt;/a&gt; vs the &lt;a href=&quot;https://packages.debian.org/trixie/python3-pip&quot; target=_blank&gt;python3-pip&lt;/a&gt; and &lt;a href=&quot;https://packages.debian.org/trixie/virtualenv&quot; target=_blank&gt;virtualenv&lt;/a&gt; package combo.

&lt;p&gt;After referring to that, I think I&#039;ll just stick with the standard pip/virtualenv combo for now.

&lt;p&gt;To get started:

&lt;blockquote&gt;&lt;pre&gt;
# install basic packages
apt-get install python3 python3-pip virtualenv python3-venv git

# create a project directory - example ansible
python3 -m venv ansible

# activate the project
cd ansible
source bin/activate

# example installation of packages
pip install ansible
pip install argcomplete
activate-global-python-argcomplete
source ~/.bash_completion
ansible-config init --disabled &gt; ansible.cfg

# to deactivate the project
deactivate

# upgrade
python3 -m pip install --upgrade ansible
&lt;/pre&gt;&lt;/blockquote&gt;
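
&lt;p&gt;To confirm the environment is actually active, check which python resolves; a minimal sketch (the /tmp path is just an example):

```shell
# Create a throwaway venv and verify that, once activated, "python"
# resolves inside the environment rather than to the system interpreter.
python3 -m venv /tmp/venv-demo
. /tmp/venv-demo/bin/activate
command -v python    # resolves to /tmp/venv-demo/bin/python
deactivate
```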

&lt;p&gt;At some point, integrate &lt;a href=&quot;https://packages.debian.org/trixie/python3-ansible-runner&quot; target=_blank&gt;python3-ansible-runner&lt;/a&gt;; see the &lt;a href=&quot;https://github.com/ansible/ansible-runner&quot; target=_blank&gt;github source&lt;/a&gt; and its links to documentation.
    </content:encoded>

    <pubDate>Sat, 28 Feb 2026 19:27:47 +0000</pubDate>
    <guid isPermaLink="false">http://blog.raymond.burkholder.net/index.php?/archives/1337-guid.html</guid>
    
</item>
<item>
    <title>LXC Fresh Container Construction From Scratch for Proxmox</title>
    <link>http://blog.raymond.burkholder.net/index.php?/archives/1335-LXC-Fresh-Container-Construction-From-Scratch-for-Proxmox.html</link>
            <category>Containers</category>
            <category>Debian</category>
            <category>LXC</category>
            <category>Proxmox</category>
    
    <comments>http://blog.raymond.burkholder.net/index.php?/archives/1335-LXC-Fresh-Container-Construction-From-Scratch-for-Proxmox.html#comments</comments>
    <wfw:comment>http://blog.raymond.burkholder.net/wfwcomment.php?cid=1335</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1335</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;There are many articles available which discuss customizing a pre-existing Proxmox Container Template.  Few, if any, discuss constructing an LXC container from scratch.  Maybe that is because, fundamentally, a container template is just the rootfs as a tarball, so building one is quite easy:

&lt;ul&gt;
  &lt;li&gt;Build a Linux based virtual machine; I use Debian&#039;s most recent release
  &lt;li&gt;Install LXC and its template package
  &lt;li&gt;Construct and initialize an LXC container
  &lt;li&gt;Shut it down and zip it up
  &lt;li&gt;Copy it over to the ProxMox template directory
  &lt;/ul&gt;

&lt;p&gt;The details:

&lt;blockquote&gt;&lt;pre&gt;
# build the linux vm - details not relevant here
# ssh into the vm, or start a command line
# install basic packages

sudo apt install --no-install-recommends lxc lxc-templates xz-utils bridge-utils wget debootstrap rsync

# basic container templates are in:
#   /usr/share/lxc/templates/ 
# for debian as well as other distributions

# create an lxc container, providing a list of any additional packages

lxc-create --template debian --name trixie-template -- --release trixie --packages iputils-ping,vim-tiny

# start and attach to the container
lxc-start trixie-template
lxc-attach trixie-template

# prepare for generating template
apt clean
apt purge

# Remove SSH host keys to ensure unique keys for each clone:
rm /etc/ssh/ssh_host_*

# Empty the machine ID file:
truncate -s 0 /etc/machine-id

# clear history
unset HISTFILE
# truncate history
history -c
&gt; ~/.bash_history
# the following has a space in front to prevent inclusion in the history
 shutdown -h now

# the shutdown returns to the virtual machine&#039;s prompt
# compress the directory structure

cd /var/lib/lxc/trixie-template/

# remove /dev files as they can&#039;t be created in an unprivileged container
# an example error message if not removed:
#   tar: ./rootfs/dev/urandom: Cannot mknod: Operation not permitted
# construction of a new container will re-create the directory and files

rm ./rootfs/dev/ptmx
rm ./rootfs/dev/zero
rm ./rootfs/dev/tty3
rm ./rootfs/dev/urandom
rm ./rootfs/dev/null
rm ./rootfs/dev/tty
rm ./rootfs/dev/console
rm ./rootfs/dev/tty4
rm ./rootfs/dev/tty2
rm ./rootfs/dev/random
rm ./rootfs/dev/tty1
rm ./rootfs/dev/full

# cd into rootfs and zip the container

cd rootfs
tar --xz --acls --numeric-owner -cf /var/local/trixie-13-3-template.tar.xz ./

# the xz file can be copied over to proxmox and placed into
# /var/lib/pve/local-btrfs/template/cache/
# for use as a template for container creation
&lt;/pre&gt;&lt;/blockquote&gt;
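
&lt;p&gt;Before copying the template over, it can be worth listing the archive back to confirm the entries are relative to the rootfs (./etc/... rather than var/lib/lxc/...).  A minimal sketch against a throwaway directory, not the real container:

```shell
# Build a tiny stand-in rootfs, archive it the same way as above, and
# list it back; entries should be relative (./etc/...), and
# --numeric-owner keeps uid/gid as numbers for the unprivileged mapping.
mkdir -p /tmp/rootfs-demo/etc
printf 'demo\n' > /tmp/rootfs-demo/etc/hostname
tar --xz --numeric-owner -cf /tmp/demo-template.tar.xz -C /tmp/rootfs-demo ./
tar -tJf /tmp/demo-template.tar.xz
```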

&lt;p&gt;During the first use of lxc-create to create the original container, packages are downloaded and installed to build the container.
The packages and installation are cached for faster subsequent builds of the same container type.

&lt;p&gt;If the cache becomes stale, it can be rebuilt by using --flush-cache in a manner similar to:

&lt;blockquote&gt;&lt;pre&gt;
lxc-create --template debian --name trixie-template -- --release trixie --flush-cache --packages iputils-ping,vim-tiny,less
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;An existing cache can be updated with something like:

&lt;blockquote&gt;&lt;pre&gt;
sudo chroot /var/cache/lxc/debian/rootfs-trixie-amd64
apt-get update
apt-get dist-upgrade
apt-get clean
exit
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;courtesy of &lt;a href=&quot;https://www.tomechangosubanana.com/2015/updating-lxc-imagecontainer-caches/&quot; target=_blank&gt;Updating lxc image/container caches&lt;/a&gt;

&lt;p&gt;One other note: there are two package candidates for installing the &lt;a href=&quot;https://unix.stackexchange.com/questions/400351/what-are-the-differences-between-iputils-ping-and-inetutils-ping&quot; target=_blank&gt;ping utility&lt;/a&gt;:

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://packages.debian.org/trixie/iputils-ping&quot; target=_blank&gt;iputils-ping&lt;/a&gt; - native Linux ping, preferred for Debian/Linux
  &lt;li&gt;&lt;a href=&quot;https://packages.debian.org/trixie/inetutils-ping&quot; target=_blank&gt;inetutils-ping&lt;/a&gt; - general GNU version, used on a variety of POSIX systems, less preferred
  &lt;/ul&gt;

&lt;p&gt;Some fix-ups in the process:

&lt;ul&gt;
  &lt;li&gt;apt-get install less
  &lt;li&gt;dpkg-reconfigure locales
  &lt;li&gt;useradd user
  &lt;/ul&gt;

 
    </content:encoded>

    <pubDate>Fri, 27 Feb 2026 21:03:19 +0000</pubDate>
    <guid isPermaLink="false">http://blog.raymond.burkholder.net/index.php?/archives/1335-guid.html</guid>
    
</item>
<item>
    <title>Sample Proxmox command to build LXC container from Template</title>
    <link>http://blog.raymond.burkholder.net/index.php?/archives/1336-Sample-Proxmox-command-to-build-LXC-container-from-Template.html</link>
            <category>Proxmox</category>
    
    <comments>http://blog.raymond.burkholder.net/index.php?/archives/1336-Sample-Proxmox-command-to-build-LXC-container-from-Template.html#comments</comments>
    <wfw:comment>http://blog.raymond.burkholder.net/wfwcomment.php?cid=1336</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1336</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;blockquote&gt;&lt;pre&gt;
pct_id=101
pct_name=test01
pct create $pct_id /var/lib/pve/local-btrfs/template/cache/trixie-13-3-template.tar.xz  \
  -hostname $pct_name \
  -description &#039;demo build&#039; \
  -onboot 1 \
  -startup up=3 \
  -ostype debian \
  -arch amd64 \
  -cores 2 \
  -memory 1024 \
  -nameserver 10.10.10.10 -searchdomain &#039;example.com&#039; \
  -net0 name=vlan30,bridge=vmbr1,ip=dhcp,tag=30,type=veth \
  -rootfs local-btrfs:8,mountoptions=&quot;noatime;discard&quot; \
  -swap 512
&lt;/pre&gt;&lt;/blockquote&gt; 
    </content:encoded>

    <pubDate>Sat, 28 Feb 2026 04:40:48 +0000</pubDate>
    <guid isPermaLink="false">http://blog.raymond.burkholder.net/index.php?/archives/1336-guid.html</guid>
    
</item>
<item>
    <title>Migrating LXC Containers From One Machine To Another</title>
    <link>http://blog.raymond.burkholder.net/index.php?/archives/916-Migrating-LXC-Containers-From-One-Machine-To-Another.html</link>
            <category>LXC</category>
    
    <comments>http://blog.raymond.burkholder.net/index.php?/archives/916-Migrating-LXC-Containers-From-One-Machine-To-Another.html#comments</comments>
    <wfw:comment>http://blog.raymond.burkholder.net/wfwcomment.php?cid=916</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=916</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;Some machines with LXC containers have been running for a number of years.  I want to take the easy way out and move the containers from one physical machine to another.  At another time, I will rebuild the containers.

&lt;p&gt;Since I am running BTRFS subvolumes for each container, I could be using BTRFS snapshot/send/receive commands to migrate/copy/replicate subvolumes.  But before attempting that, I wanted to give the &#039;copy&#039; a try.  To do this properly, at the source, use the following command -- with --numeric-owner being a required parameter -- to collect the files:

&lt;blockquote&gt;&lt;pre&gt;
tar --numeric-owner -czvf mycontainer.tar.gz /var/lib/lxc/my_container
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;At the destination, expand that file out:

&lt;blockquote&gt;&lt;pre&gt;
tar --numeric-owner -xzvf mycontainer.tar.gz -C /var/lib/lxc/
&lt;/pre&gt;&lt;/blockquote&gt;
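
&lt;p&gt;The effect of --numeric-owner can be seen in a verbose listing: with the flag, ownership appears as raw uid/gid numbers, which is what keeps the container&#039;s id mappings intact on the destination.  A small illustration against a throwaway directory:

```shell
# With --numeric-owner, tar's verbose listing shows numeric uid/gid
# (e.g. 0/0 or 1000/1000) instead of resolving them to names.
mkdir -p /tmp/lxc-demo
printf 'x\n' > /tmp/lxc-demo/file
tar --numeric-owner -czf /tmp/lxc-demo.tar.gz -C /tmp lxc-demo
tar --numeric-owner -tzvf /tmp/lxc-demo.tar.gz
```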

&lt;p&gt;The &lt;a href=&quot;http://lxc-users.linuxcontainers.narkive.com/ATkcbMOJ/what-is-right-way-to-backup-and-restore-linux-containers&quot; target=_blank&gt;lxc users mailing list&lt;/a&gt; and 
&lt;a href=&quot;https://stackoverflow.com/questions/23427129/how-do-i-backup-move-lxc-containers&quot; target=_blank&gt;Stack Overflow&lt;/a&gt; were helpful.

&lt;p&gt;Other stuff to do:

&lt;ul&gt;
  &lt;li&gt;Read up on &lt;a href=&quot;http://man7.org/linux/man-pages/man7/cgroups.7.html&quot; target=_blank&gt;CGroups&lt;/a&gt; in the Linux Programmer&#039;s Manual
  &lt;/ul&gt;

&lt;p&gt;In migrating from a very old version of LXC to a much newer version, I was getting errors.  I needed to run some debugging to get a handle on them:

&lt;blockquote&gt;&lt;pre&gt;
lxc-start -n container -F --logpriority=DEBUG --logfile log
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;I had errors along the lines of:

&lt;blockquote&gt;&lt;pre&gt;
Activating lvm and md swap...done.
Checking file systems...Segmentation fault (core dumped)
failed (code 139).
&lt;/pre&gt;&lt;/blockquote&gt;

&lt;p&gt;&lt;a href=&quot;https://serverfault.com/questions/896524/how-to-fix-filesystem-of-a-lxc-container&quot; target=_blank&gt;ServerFault&lt;/a&gt; had the solution: put &quot;vsyscall=emulate&quot; into /etc/default/grub, run &#039;update-grub&#039; and reboot.  Looks like I need to modernize my containers so I can eliminate this workaround, which may have some security considerations.  There is a &lt;a href=&quot;https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=891393&quot; target=_blank&gt;Debian Bug&lt;/a&gt; for this.
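
&lt;p&gt;For reference, that workaround amounts to adding the parameter to the kernel command line in /etc/default/grub; a sketch (the &quot;quiet&quot; value stands in for whatever the variable already contains on a given system):

```shell
# /etc/default/grub -- append vsyscall=emulate to the existing
# kernel command line, then run 'update-grub' and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet vsyscall=emulate"
```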

&lt;p&gt;&lt;a href=&quot;https://einsteinathome.org/content/vsyscall-now-disabled-latest-linux-distros&quot; target=_blank&gt;einstein home&lt;/a&gt; has a blog with some kernel references to the issue, in effect saying: &quot;vsyscall is now disabled on latest linux distros&quot;.  A lengthier LWN article is available at
&lt;a href=&quot;https://lwn.net/Articles/446528/&quot; target=_blank&gt;On vsyscalls and the vDSO&lt;/a&gt;.  The workaround works with kernel 4.14, my current version, but I see somewhere else that it is entirely removed in kernel 4.15, at least in the Arch world.  At &lt;a href=&quot;https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=847154&quot; target=_blank&gt;bug 847154&lt;/a&gt;: &quot;This breaks (e)glibc 2.13 and earlier&quot;.

&lt;p&gt;Note, see newer notes at &lt;a href=&quot;https://blog.raymond.burkholder.net/index.php?/archives/1335-LXC-Fresh-Container-Construction-From-Scratch-for-Proxmox.html&quot; target=_blank&gt;LXC Fresh Container Construction From Scratch for Proxmox&lt;/a&gt;. 
    </content:encoded>

    <pubDate>Wed, 04 Apr 2018 15:39:24 +0000</pubDate>
    <guid isPermaLink="false">http://blog.raymond.burkholder.net/index.php?/archives/916-guid.html</guid>
    
</item>
<item>
    <title>Debian Headers first then Kernel for DKMS rebuilds</title>
    <link>http://blog.raymond.burkholder.net/index.php?/archives/1334-Debian-Headers-first-then-Kernel-for-DKMS-rebuilds.html</link>
            <category>Debian</category>
    
    <comments>http://blog.raymond.burkholder.net/index.php?/archives/1334-Debian-Headers-first-then-Kernel-for-DKMS-rebuilds.html#comments</comments>
    <wfw:comment>http://blog.raymond.burkholder.net/wfwcomment.php?cid=1334</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://blog.raymond.burkholder.net/rss.php?version=2.0&amp;type=comments&amp;cid=1334</wfw:commentRss>
    

    <author>nospam@example.com (Raymond P. Burkholder)</author>
    <content:encoded>
    &lt;p&gt;Something from a Debian mailing list:

&lt;blockquote&gt;
&lt;p&gt;I found the root cause, when testing 6.12.57 I installed the image then the 
headers and the NVIDIA DKMS module was not rebuilt because the matching linux-
headers package was not installed at the time the kernel image was configured.

&lt;p&gt;If I install the headers first and then the linux-image package, DKMS correctly 
builds the NVIDIA module and 6.12.63 works fine, so it doesn&#039;t look like a 
kernel regression after all.

&lt;p&gt;I don&#039;t know if I should manually run dkms autoinstall myself after a kernel 
update  (I never had to before) or if there was a bug during the install 
process of this update.
&lt;/blockquote&gt;

&lt;p&gt;Makes sense; I had an NVidia compile fail in a similar manner.  This makes it obvious what I should have observed.

 
    </content:encoded>

    <pubDate>Sat, 17 Jan 2026 03:39:34 +0000</pubDate>
    <guid isPermaLink="false">http://blog.raymond.burkholder.net/index.php?/archives/1334-guid.html</guid>
    
</item>

</channel>
</rss>
