- Linux Virtual Server - a highly scalable and highly available server built on a cluster of real servers, with the load balancer running on the Linux operating system (last date on the web page: 2012)
- Linux-VServer - provides virtualization for GNU/Linux systems, accomplished through kernel-level isolation. It allows multiple virtual units to run at once. Those units are sufficiently isolated to guarantee the required security, but utilize available resources efficiently, as they run on the same kernel. (a precursor to LXC) (last mod 2018)
Saturday, March 23. 2024
Linux Good Old Stuff
Sunday, December 3. 2023
ls notes
ls -srSk ~/Downloads/
- s - size in blocks
- r - reverse order (largest at bottom)
- k - 1024 byte block size
- S - sort by size
ls -d ~/Downloads/*
- Shows path to all files in the directory
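Both commands can be sketched quickly against a scratch directory (the file names below are made up for illustration):

```shell
# Build a scratch directory with two files of different sizes
dir=$(mktemp -d)
head -c 100  /dev/zero > "$dir/small.bin"
head -c 5000 /dev/zero > "$dir/large.bin"

# -S sorts by size, -r reverses it (largest at the bottom),
# -s prints the allocated size in blocks, -k uses 1024-byte blocks
ls -srSk "$dir"

# -d with a glob prints the path of each entry rather than descending into it
ls -d "$dir"/*

rm -r "$dir"
```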
Monday, November 27. 2023
More eBPF
According to the slides from a 2023 Linux Storage, Filesystem, Memory-Management and BPF Summit talk, guests operating through the netkit device (which was called "meta" at that time) are able to attain TCP data-transmission rates that are just as high as can be had by running directly on the host. The performance penalty for running within a guest has, in other words, been entirely removed.
Saturday, November 11. 2023
Networks on Linux
Wednesday, July 19. 2023
SaltStack on Debian Bookworm
I found out the hard way that SaltStack and Debian no longer play nice together. I had upgraded a Debian installation from Bullseye to Bookworm, along with the resident Salt minion. Afterwards, the minion would no longer start, due to various imports no longer working -- a consequence of the salt-minion itself not having been upgraded. The error message that started this odyssey:
salt ImportError: cannot import name 'Markup' from 'jinja2'
Taking a look at the Debian Developer Information for Salt, the last version uploaded to 'unstable' was 3004.1, back in December of 2022. Almost 8 months later, there has been little or no movement. There was some mention in a ticket somewhere that Salt release cycles don't cater to Debian stable release cycles. Whether or not that is a legitimate reason, SaltStack management on Debian is no longer a simple no-brainer.
However, after a little digging, there is a way to run SaltStack version 3006 (current as of this writing). It is simple to install on Bullseye, but not so easily done on Bookworm.
On Bullseye (as root, or implies sudo):
# cd ~
# apt remove salt-minion salt-master
# apt install curl
# curl -L https://bootstrap.saltstack.com -o install_salt.sh
# sh install_salt.sh -M onedir
The '-M' installs the salt master at the same time (for machines running master). If you forget to do that, you'll need to diagnose and fix the systemctl mask error with the following:
# apt install file
# file /etc/systemd/system/salt-master.service
# rm /etc/systemd/system/salt-master.service
# systemctl daemon-reload
# sh install_salt.sh -M onedir
The 'file' command should show a symlink to /dev/null (the mask), which the 'rm' removes so that 'sh install_salt.sh -M onedir' can proceed.
On Bookworm, the bootstrap isn't expected to work until sometime in early 2024, I think, with Salt 3007 or 3008 -- more info in [FEATURE REQUEST] Add Salt support for Debian 12 #64223.
In the meantime, I had to cheat a bit:
- in /etc/debian_version, change 12.0 to 11.0
- in /etc/apt/sources.list, change bookworm to bullseye
- rm /etc/apt/sources.list.d/salt.list
- run apt update
- run the commands listed above for installing one or both of the salt services
- restore /etc/debian_version and /etc/apt/sources.list to their original content
I'm sure there are more elegant ways of doing this, but it worked to fake the version 11 that the installation script's checks and directory traversal require.
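The steps above can be sketched as a script (run as root; the sed patterns are my own guesses at safe edits, so check the files before and after):

```shell
# Back up the files we are about to fake
cp /etc/debian_version /etc/debian_version.bak
cp /etc/apt/sources.list /etc/apt/sources.list.bak

# Masquerade as Debian 11 / Bullseye for the bootstrap script
sed -i 's/^12\..*$/11.0/' /etc/debian_version
sed -i 's/bookworm/bullseye/g' /etc/apt/sources.list
rm -f /etc/apt/sources.list.d/salt.list
apt update

# Install one or both salt services, per the Bullseye instructions above
sh install_salt.sh -M onedir

# Restore the originals
mv /etc/debian_version.bak /etc/debian_version
mv /etc/apt/sources.list.bak /etc/apt/sources.list
```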
Note, more info on the Salt Install/Bootstrap Process.
Saturday, July 1. 2023
Linux: recover a rm'd file still open
I had an application running which had an open file it was actively using.
I accidentally performed an rm (remove) on that file rather than the one I actually meant to remove.
Due to the way Linux links files, even though the directory entry for the file was removed, the file is still open, and an additional link to it remains via the application process's pseudo directory under /proc.
The process sub-directory is composed of the process id. The process id can be found with something like:
$ pidof BasketTrading
2663042
The deleted file can then be found with:
$ lsof -p 2663042 | grep deleted
BasketTra 2663042 rpb 18u REG 0,46 2739200 29438986 /home/.../BasketTrading.db (deleted)
The 18u reflects the file descriptor used for the file. This can be used to perform a simple copy of the file to an alternate location. A link does not seem to fix it. You may want to complete any outstanding writes to the file first. But do not close it or the application. If you do, the file will be unrecoverable.
$ ls /proc/2663042/fd/18
/proc/2663042/fd/18
$ cp /proc/2663042/fd/18 /home/.../BasketTrading.db.rescue
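The whole recovery can be demonstrated end to end with a throwaway process standing in for the application (the names below are illustrative, not the BasketTrading setup above):

```shell
dir=$(mktemp -d)
echo "important data" > "$dir/live.db"

# A stand-in for the application: tail -f keeps the file open
tail -f "$dir/live.db" > /dev/null &
pid=$!
sleep 1

rm "$dir/live.db"        # oops -- directory entry gone, inode still live

# Find the fd pointing at the deleted file, then copy the data out
fd=$(ls -l /proc/$pid/fd | awk '/deleted/{print $9; exit}')
cp "/proc/$pid/fd/$fd" "$dir/rescued.db"
cat "$dir/rescued.db"    # the data survives

kill $pid
rm -r "$dir"
```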
Sunday, June 25. 2023
Kiosk Recovery
Here is an interesting note from Connor's Blog - an automatic reboot after kernel panic with the following sysctl setting:
$ sysctl -w kernel.panic=60
... or on the kernel command line:
panic=60
The kernel reboots after 60 seconds. It might start a doom loop, but it is a potential way to get a remote system back up after a failure.
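To make the setting survive a reboot, it can also go in a sysctl.d drop-in (the file name here is arbitrary), e.g. /etc/sysctl.d/90-panic-reboot.conf:

```
# Reboot 60 seconds after a kernel panic
kernel.panic = 60
```

Reload with 'sudo sysctl --system'.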
Tegra TK1
Yes, this is a very old device (back to 2017 or so). But I have a couple, and I need to keep a few reference notes to see if it is upgradeable.
- Linux for Tegra R21.8 found from Jetson Linux Archive -- shows a 3.10.40 version kernel with a 32 bit driver package -- pretty old, if only 32 bit.
- Installing Debian on nvidia Jetson TK1 - based on the Tegra K1 chip (also known as Tegra 124). The Tegra K1 (codenamed "Logan") features a quad-core 32-bit ARM Cortex-A15 CPU and Kepler GPU (GK20A) with 192 CUDA cores. The Jetson TK1 can run Debian's armhf port.
Friday, June 23. 2023
systemd-networkd
References
- systemd-networkd - from archlinux, with reference to systemd-nspawn for container networking. -- interesting note: "it is possible to run Docker containers inside an unprivileged systemd-nspawn container with cgroups v2 enabled". But from an old talk:
- Creating containers with systemd-nspawn - It is targeted at "building, testing, debugging, and profiling", not at deployment. systemd-nspawn uses the same kernel APIs that the other two tools use, but is not a competitor to them because it is not targeted at running in a production environment.
NetworkManager - error - 'device is strictly unmanaged'
Gone are the good ol' days of using /etc/network/interfaces to manage basic networking stack configurations. There now seems to be an explosion of alternatives, each stepping on the others' toes: /etc/network/interfaces, NetworkManager, NetPlan, systemd-networkd, ...
A problem I had the other day was where a new installation of an espressobin had NetworkManager installed, no NetPlan, and some stuff in /etc/network/interfaces. Unfortunately, since NetworkManager controls the dhcp-client, the /etc/network/interfaces interface was not obtaining an address.
The solution was to edit /etc/NetworkManager/conf.d/10-ignore-interfaces.conf, and comment the following line:
[keyfile]
#unmanaged-devices=interface-name:eth*,interface-name:wan*,interface-name:lan*,interface-name:br*
After 'sudo service NetworkManager restart', the 'device is strictly unmanaged' error is resolved and NetworkManager can manage the interfaces, which for the espressobin are lan0, lan1, and wan.
Some commands for NetworkManager:
- nmcli
- nmcli device show
- nmcli connection show
File locations:
- /etc/NetworkManager/system-connections/ - interface configurations
- /usr/share/doc/network-manager/README.Debian - notes about managed/unmanaged devices
Some references:
- Debian NetworkManager - mostly desktop?
- Debian NetworkConfiguration - mostly server?
- NetworkManager homepage
- nmcli - command-line tool for controlling NetworkManager
- NetworkManager.conf - NetworkManager configuration file
- NetworkManager - archlinux view of NetworkManager, with a section on VPN connectivity based upon profiles
Wednesday, June 7. 2023
dbus monitoring
It is always interesting looking at bug reports. They uncover interesting nooks and crannies of system tooling. In reading Debian Bug#1037194, there is a command for watching dbus:
$ sudo dbus-monitor --system
Thursday, June 1. 2023
Configuration for a Simple systemd Service
To keep locally generated systemd service files separate from distribution files (as an example):
$ sudo mkdir -p /usr/local/lib/systemd/system
$ sudo vim /usr/local/lib/systemd/system/bme680.service
$ sudo systemctl daemon-reload
$ sudo systemctl enable bme680
$ sudo systemctl start bme680
An example systemd service file (here in /usr/local/lib/systemd/system/bme680.service) might look like:
[Unit]
Description=BME680 Collector
Documentation=https://github.com/rburkholder/bme680
After=network.target

[Service]
Type=simple
User=debian
WorkingDirectory=/home/debian
Environment="ID=02" "LOCATION=top floor"
ExecStart=/home/debian/bme680/build/bme680-mqtt ${ID} ${LOCATION}
ExecReload=kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=5s
StandardOutput=null
# note: change with logrotate implemented, or to a memory file

[Install]
WantedBy=multi-user.target
The example from ecowitt2mqtt got me started -- it is a small CLI/web server that can receive data from Fine Offset weather stations (and their numerous white-labeled counterparts, like Ecowitt and Ambient Weather), adjust that data in numerous ways, and send it on to one or more MQTT brokers.
Reference to:
- systemd.service — Service unit configuration
- systemd.exec — Execution environment configuration - has list of expanded exit codes
Sunday, May 14. 2023
dd with ongoing commit
From a debian-boot mailing list:
A couple more options may help: it's sometimes possible that you've extracted the USB stick while files were still writing.
sudo dd \
  if=/debian-11.7.0-amd64-netinst.iso \
  of=/dev/sda \
  bs=1M oflag=dsync status=progress
This forces a sync on each write to the USB so you don't lose data, and also gives a brief status output of the size written.
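The effect of oflag=dsync can be tried out on throwaway files instead of a real USB device (the temp files below stand in for the iso and /dev/sda):

```shell
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"   # 1 MiB of test data

# dsync forces a synchronized write for every output block,
# so the progress shown reflects data actually committed
dd if="$src" of="$dst" bs=64K oflag=dsync status=progress

cmp "$src" "$dst" && echo "copy verified"
rm "$src" "$dst"
```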
Monday, March 27. 2023
dmesg parameters
Learned something: dmesg has a parameter to look for certain types of messages:
root@host02:~# dmesg --level err
[    0.239659] ACPI: SPCR: Unexpected SPCR Access Width. Defaulting to byte size
[    2.676888] mpt2sas_cm0: overriding NVDATA EEDPTagMode setting
[   11.513804] ACPI Error: No handler for Region [SYSI] (00000000680cfabd) [IPMI] (20210730/evregion-130)
[   11.513921] ACPI Error: Region IPMI (ID=7) has no handler (20210730/exfldio-261)
[   11.514031] ACPI Error: Aborting method \_SB.PMI0._GHL due to previous error (AE_NOT_EXIST) (20210730/psparse-529)
[   11.514168] ACPI Error: Aborting method \_SB.PMI0._PMC due to previous error (AE_NOT_EXIST) (20210730/psparse-529)
The man page describes the various levels, plus some related output fields:
- subsys - The message sub-system prefix (e.g., "ACPI:").
- time - The message timestamp.
- timebreak - The message timestamp in short ctime format in --reltime or --human output.
- alert - The text of the message with the alert log priority.
- crit - The text of the message with the critical log priority.
- err - The text of the message with the error log priority.
- warn - The text of the message with the warning log priority.
- segfault - The text of messages that inform about segmentation faults.
Tuesday, March 7. 2023
Linux Network Diagnostic
Obtaining interface list, which can be used to drill down into details:
$ ls /sys/class/net/
br0  eno1  enp5s0  lo  lxcbr0  veth-tf64-v90  vlan90  vlan90_br0  wlp6s0

$ ls /sys/class/net/enp5s0
addr_assign_type  carrier             dev_id    gro_flush_timeout     master            operstate       proto_down  testing       upper_br0
address           carrier_changes     dev_port  ifalias               mtu               phys_port_id    queues      threaded
addr_len          carrier_down_count  dormant   ifindex               name_assign_type  phys_port_name  speed       tx_queue_len
broadcast         carrier_up_count    duplex    iflink                napi_defer_hard_irqs  phys_switch_id  statistics  type
brport            device              flags     link_mode             netdev_group      power           subsystem   uevent
Provides some descriptive details and places of interest:
$ sudo udevadm test-builtin net_id /sys/class/net/enp5s0
Trying to open "/etc/systemd/hwdb/hwdb.bin"...
Trying to open "/etc/udev/hwdb.bin"...
Trying to open "/usr/lib/systemd/hwdb/hwdb.bin"...
Trying to open "/lib/systemd/hwdb/hwdb.bin"...
Trying to open "/lib/udev/hwdb.bin"...
=== trie on-disk ===
tool version: 252
file size: 12198286 bytes
header size 80 bytes
strings 2478998 bytes
nodes 9719208 bytes
Loading kernel module index.
Found cgroup2 on /sys/fs/cgroup/, full unified hierarchy
Found container virtualization none.
Using default interface naming scheme 'v252'.
Parsed configuration file "/usr/lib/systemd/network/99-default.link"
Parsed configuration file "/usr/lib/systemd/network/73-usb-net-by-mac.link"
Created link configuration context.
ID_NET_NAMING_SCHEME=v252
ID_NET_NAME_MAC=enx54b2030473fa
enp5s0: MAC address identifier: hw_addr=54:b2:03:04:73:fa → x54b2030473fa
ID_OUI_FROM_DATABASE=PEGATRON CORPORATION
sd-device: Failed to chase symlinks in "/sys/devices/pci0000:00/0000:00:1c.1/0000:05:00.0/of_node".
sd-device: Failed to chase symlinks in "/sys/devices/pci0000:00/0000:00:1c.1/0000:05:00.0/physfn".
enp5s0: Parsing slot information from PCI device sysname "0000:05:00.0": success
enp5s0: dev_port=0
enp5s0: PCI path identifier: domain=0 bus=5 slot=0 func=0 phys_port= dev_port=0 → p5s0
ID_NET_NAME_PATH=enp5s0
Unload kernel module index.
Unloaded link configuration context.
The raw details contributing to the previous data:
$ udevadm info /sys/class/net/enp5s0
P: /devices/pci0000:00/0000:00:1c.1/0000:05:00.0/net/enp5s0
M: enp5s0
R: 0
U: net
I: 2
E: DEVPATH=/devices/pci0000:00/0000:00:1c.1/0000:05:00.0/net/enp5s0
E: SUBSYSTEM=net
E: INTERFACE=enp5s0
E: IFINDEX=2
E: USEC_INITIALIZED=5978173
E: ID_NET_NAMING_SCHEME=v252
E: ID_NET_NAME_MAC=enx54b2030473fa
E: ID_OUI_FROM_DATABASE=PEGATRON CORPORATION
E: ID_NET_NAME_PATH=enp5s0
E: ID_BUS=pci
E: ID_VENDOR_ID=0x8086
E: ID_MODEL_ID=0x157b
E: ID_PCI_CLASS_FROM_DATABASE=Network controller
E: ID_PCI_SUBCLASS_FROM_DATABASE=Ethernet controller
E: ID_VENDOR_FROM_DATABASE=Intel Corporation
E: ID_MODEL_FROM_DATABASE=I210 Gigabit Network Connection
E: ID_PATH=pci-0000:05:00.0
E: ID_PATH_TAG=pci-0000_05_00_0
E: ID_NET_DRIVER=igb
E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
E: ID_NET_NAME=enp5s0
E: SYSTEMD_ALIAS=/sys/subsystem/net/devices/enp5s0
E: TAGS=:systemd:
E: CURRENT_TAGS=:systemd:
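The attributes under /sys/class/net are plain files, so drilling down is just a matter of cat. A minimal sketch using the loopback device (always present) rather than the enp5s0 above:

```shell
iface=lo    # substitute any interface name from ls /sys/class/net/

cat /sys/class/net/$iface/mtu                   # maximum transmission unit
cat /sys/class/net/$iface/operstate             # e.g. up, down, unknown
cat /sys/class/net/$iface/statistics/rx_bytes   # received byte count
```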