Wireless communication provides advantages that its wired counterpart cannot, such as flexibility, ease of deployment and use, cost reduction, and convenience. Wireless multi-hop networks (WMNs) have no centralized management infrastructure, yet they have offered many benefits since they were first proposed. In such a network, a node that wants to send a packet to a destination outside its transmission range must depend on intermediate nodes: packets are forwarded hop by hop until they reach the destination, and the forwarding paths change dynamically. The lack of centralized management allows some nodes to behave maliciously. A malicious node is one that receives packets and then drops them; this behaviour can have many causes, such as hardware failure, software failure, or lack of power. Such nodes drop large numbers of packets and strongly degrade network performance: throughput decreases, end-to-end delay increases, and overhead increases. We must therefore be aware of the presence of malicious nodes in the network and perform routing based on this awareness. This paper studies and reviews the malicious node detection methods proposed in the literature. We categorize the networks into groups, including ad hoc networks, MANET, DTN, opportunistic networks, WSN, VANET, and other wireless networks, and compare the malicious node detection methods.
Tuesday, June 10. 2025
Multi Hop Wireless
Sunday, June 8. 2025
kwin_wayland_wrapper[<pid>]: kwin_scene_opengl: 0x2: GL_INVALID_OPERATION in glDrawBuffers(unsupported buffer GL_BACK_LEFT)
Some notes in trying to eliminate the brunt of the log level spamming. The entries seem to occur primarily when switching virtual desktops. And also when alt-tabbing between windows.
The current logging level can be determined with the following (this is after I made changes; some output lines have been removed for succinctness):
root@x670e:/home/rpb# systemctl --user show plasma-kwin_wayland | grep -i log
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=5
DropInPaths=/usr/lib/systemd/user/plasma-kwin_wayland.service.d/log.conf
A change to the logging level can be made with the following (it doesn't seem to totally work, but was worth a try):
mkdir /usr/lib/systemd/user/plasma-kwin_wayland.service.d
cat > /usr/lib/systemd/user/plasma-kwin_wayland.service.d/log.conf << EOF
[Service]
LogLevelMax=warning
StandardOutput=null
EOF
systemctl daemon-reload
systemctl --user daemon-reload
# optional, close all work, as this resets the gui session:
systemctl --user restart plasma-kwin_wayland
Hints of a solution are at kwin_wayland spamming journal with GPU info.
Check to see if the file was included with:
systemctl --user cat plasma-kwin_wayland
Logging destination can be checked with:
Continue reading "kwin_wayland_wrapper[<pid>]:..." »systemctl --user show plasma-kwin_wayland | grep Standard StandardInput=null StandardOutput=journal StandardError=inherit
rtkit-daemon[<pid>]: Supervising <m> threads of <n> processes of <y> users.
Package: rtkit (0.13-5.1 and others) describes rtkit-daemon as:
RealtimeKit is a D-Bus system service that changes the scheduling policy of user processes/threads to SCHED_RR (i.e. realtime scheduling mode) on request. It is intended to be used as a secure mechanism to allow real-time scheduling to be used by normal user processes.
The control file is located at /usr/lib/systemd/system/rtkit-daemon.service. Additional configurations can be supplied via .conf files in /usr/lib/systemd/system/rtkit-daemon.service.d.
In Debian, no logging level has been specifically configured for rtkit-daemon.
Based upon notes at Stop rtkit-daemon from spamming logs with "Supervising X threads of Y processes of Z users", the solution is simply to set the logging level.
mkdir /usr/lib/systemd/system/rtkit-daemon.service.d
cat > /usr/lib/systemd/system/rtkit-daemon.service.d/log.conf << EOF
[Service]
LogLevelMax=info
EOF
systemctl daemon-reload
systemctl restart rtkit-daemon.service
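A quick before/after check (my addition; the counts will obviously vary from system to system) is to count how many of the "Supervising" messages made it into the journal for the current boot:

# count rtkit-daemon "Supervising" messages from the current boot
journalctl -u rtkit-daemon -b | grep -c "Supervising"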
Since the message is logged at level debug, capping the level at info should eliminate most of the spam.
Logging level is a numeric value based upon the following list:
- 0 or emerg (emergency; highest priority messages)
- 1 or alert
- 2 or crit (critical)
- 3 or err (error)
- 4 or warning
- 5 or notice
- 6 or info
- 7 or debug (lowest priority messages)
Note: if no LogLevelMax is specified in a service's .conf file, the level defaults to 7 (debug), in other words allowing the highest level of verbosity (which confirms my earlier statement). More details can be found in systemd.exec — Execution environment configuration.
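The effective ceiling for a unit can also be queried directly, which is a handy way to confirm that a drop-in was picked up after the daemon-reload:

# confirm the drop-ins took effect (system unit, and the kwin user unit from earlier)
systemctl show rtkit-daemon.service --property=LogLevelMax
systemctl --user show plasma-kwin_wayland --property=LogLevelMax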
There is an outstanding bug in Debian.
Automation - By Intent
When starting out with automating processes and functions in infrastructure management and software deployment, much time is typically spent putting the nuts and bolts of the system together through declarative commands from tools such as Ansible or Terraform.
Once typical patterns have been encountered and documented, an additional level of abstraction can be used to manage the intent of these deployments. Rather than building single-purpose scripts to handle individual elements, these purpose-built patterns can be aggregated (like subroutines) to assemble and configure multiple integrated components of infrastructure.
These assemblies can then be used to ensure that test scenarios are consistent across pre-prod environments such as development, quality assurance, performance optimization and on into production.
High-level abstraction designs become the de facto documentation for how something is built.
Network Automation Is More than Just Ansible
Start at the Beginning-Automating Network Design: Claudia de Luna’s AutoCon3 Opening Keynote Summary
Saturday, June 7. 2025
Linux - In The Weeds
Some tabs I've had open for too many years; they need cleaning up, but seemed important enough that I might come back to them.
- Linux sysctl - tutorial showing where some of the most used and quoted sysctl/network parameters sit within the Linux network flow
- A deep dive into Linux namespaces, part 4 (ifeanyi.co) - A deep dive into Linux namespaces - in multiple parts - A Linux namespace is an abstraction over resources in the operating system. We can think of a namespace as a box. Inside this box are these system resources, which ones exactly depend on the box’s (namespace’s) type. There are currently 7 types of namespaces: Cgroup, IPC, Network, Mount, PID, User, UTS.
- Making containers safer - different kernel mechanisms that can be used to make containers more secure, with some recommendations
- How To Show Available WiFi Networks, Their Channels, Signal Strength And More From The Command Line - starting point: How to Show Available WiFi Networks on Linux from the Command Line (linuxuprising.com) - e.g. (see also the nmcli field-filtering example after this list):
nmcli -f ALL dev wifi
horst
- Hacking Reolink cameras for fun and profit - binwalk, strace, wireshark, ghidra, gdb, busybox, baichuan protocol, ...
- Reverse engineering my router's firmware with binwalk (embeddedbits.org) - hackernews
- Firefox Multi-Account Containers
- I2C in a Nutshell (memfault.com) - hackernews, I2C in a Nutshell - content
- “This is why I use ad blockers and a pi-hole server” (twitter.com/poa_nyc) - hackernews
- lobster network summary
- Linux Performance: Almost Always Add Swap Space – Part 2: ZRAM
- how Linux keeps time - and how to use it
- OBS Studio: Open-source software for video recording and live streaming (obsproject.com) - hackernews
- Intel Virtualisation: How VT-X, KVM and QEMU Work Together (binarydebt.wordpress.com) - hackernews, Binary Debt
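Following up on the WiFi listing item above: a narrower, hypothetical variant of the nmcli call that rescans first and limits the output to a handful of fields (field names as accepted by nmcli's wifi listing) might look like this:

# rescan, then show only the columns of interest
nmcli -f SSID,BSSID,CHAN,FREQ,SIGNAL,SECURITY dev wifi list --rescan yes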
Thursday, June 5. 2025
Deployment Pipelines
Continuous Integration and Continuous Deployment (CI/CD) pipelines are pivotal to modern software engineering, yet diagnosing and resolving their failures remains a complex and labor-intensive challenge. In this paper, we present LogSage, the first end-to-end LLM-powered framework that performs root cause analysis and solution generation from failed CI/CD pipeline logs. During the root cause analysis stage, LogSage employs a specialized log preprocessing pipeline tailored for LLMs, which extracts critical error logs and eliminates noise to enhance the precision of LLM-driven root cause analysis. In the solution generation stage, LogSage leverages RAG to integrate historical resolution strategies and utilizes tool-calling to deliver actionable, automated fixes. We evaluated the root cause analysis stage using a newly curated open-source dataset, achieving 98% precision and a 12% improvement over naively designed LLM-based log analysis baselines, while attaining near-perfect recall. The end-to-end system was rigorously validated in a large-scale industrial CI/CD environment of production quality, processing more than 3,000 executions daily and accumulating more than 1.07 million executions in its first year of deployment, with end-to-end precision exceeding 88%. These two forms of evaluation confirm that LogSage provides a scalable and practical solution to manage CI/CD pipeline failures in real-world DevOps workflows.
ML Meets the Hand
Grasp2Grasp: Vision-Based Dexterous Grasp Translation via Schrödinger Bridges
We propose a new approach to vision-based dexterous grasp translation, which aims to transfer grasp intent across robotic hands with differing morphologies. Given a visual observation of a source hand grasping an object, our goal is to synthesize a functionally equivalent grasp for a target hand without requiring paired demonstrations or hand-specific simulations. We frame this problem as a stochastic transport between grasp distributions using the Schrödinger Bridge formalism. Our method learns to map between source and target latent grasp spaces via score and flow matching, conditioned on visual observations. To guide this translation, we introduce physics-informed cost functions that encode alignment in base pose, contact maps, wrench space, and manipulability. Experiments across diverse hand-object pairs demonstrate our approach generates stable, physically grounded grasps with strong generalization. This work enables semantic grasp transfer for heterogeneous manipulators and bridges vision-based grasping with probabilistic generative modeling.
Wednesday, June 4. 2025
Every Day Applications
The fundamental information-theoretic limits of covert, or low probability of detection (LPD), communication have been extensively studied for over a decade, resulting in the square root law (SRL): only $L\sqrt{n}$ covert bits can be reliably transmitted over time-bandwidth product $n$, for constant $L>0$. Transmitting more either results in detection or decoding errors. The SRL imposes significant constraints on hardware realization of provably-secure covert communication. Thus, experimental validation of covert communication is underexplored: to date, only two experimental studies of SRL-based covert communication are available, both focusing on optical channels. Here, we report our initial results demonstrating provably-secure covert radio-frequency (RF) communication using software-defined radios (SDRs). These results validate theoretical predictions, open practical avenues for implementing covert communication systems, and raise future research questions.
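A quick restatement of the scaling (my own note, not part of the abstract): since only $L\sqrt{n}$ bits can be sent covertly over $n$ channel uses, the covert rate per channel use vanishes,

$$\frac{L\sqrt{n}}{n} = \frac{L}{\sqrt{n}} \longrightarrow 0 \quad \text{as } n \to \infty,$$

so covert communication is achievable, but only at zero asymptotic rate.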
Automatic Stage Lighting Control: Is it a Rule-Driven Process or Generative Task?
Stage lighting plays an essential role in live music performances, influencing the engaging experience of both musicians and audiences. Given the high costs associated with hiring or training professional lighting engineers, Automatic Stage Lighting Control (ASLC) has gained increasing attention. However, most existing approaches only classify music into limited categories and map them to predefined light patterns, resulting in formulaic and monotonous outcomes that lack rationality. To address this issue, this paper presents an end-to-end solution that directly learns from experienced lighting engineers -- Skip-BART. To the best of our knowledge, this is the first work to conceptualize ASLC as a generative task rather than merely a classification problem. Our method modifies the BART model to take audio music as input and produce light hue and value (intensity) as output, incorporating a novel skip connection mechanism to enhance the relationship between music and light within the frame grid. We validate our method through both quantitative analysis and a human evaluation, demonstrating that Skip-BART outperforms conventional rule-based methods across all evaluation metrics and shows only a limited gap compared to real lighting engineers. Specifically, our method yields a p-value of 0.72 in a statistical comparison based on human evaluations with human lighting engineers, suggesting that the proposed approach closely matches human lighting engineering performance. To support further research, we have made our self-collected dataset, code, and trained model parameters available at github: Skip-BART
This paper presents a real-time transaction monitoring framework that integrates graph-based modeling, narrative field embedding, and generative explanation to support automated financial compliance. The system constructs dynamic transaction graphs, extracts structural and contextual features, and classifies suspicious behavior using a graph neural network. A retrieval-augmented generation module generates natural language explanations aligned with regulatory clauses for each flagged transaction. Experiments conducted on a simulated stream of financial data show that the proposed method achieves superior results, with 98.2% F1-score, 97.8% precision, and 97.0% recall. Expert evaluation further confirms the quality and interpretability of generated justifications. The findings demonstrate the potential of combining graph intelligence and generative models to support explainable, audit-ready compliance in high-risk financial environments.
Anomaly Detection
Cluster-Aware Causal Mixer for Online Anomaly Detection in Multivariate Time Series
Early and accurate detection of anomalies in time series data is critical, given the significant risks associated with false or missed detections. While MLP-based mixer models have shown promise in time series analysis, they lack a causality mechanism to preserve temporal dependencies inherent in the system. Moreover, real-world multivariate time series often contain numerous channels with diverse inter-channel correlations. A single embedding mechanism for all channels does not effectively capture these complex relationships. To address these challenges, we propose a novel cluster-aware causal mixer to effectively detect anomalies in multivariate time series. Our model groups channels into clusters based on their correlations, with each cluster processed through a dedicated embedding layer. In addition, we introduce a causal mixer in our model, which mixes the information while maintaining causality. Furthermore, we present an anomaly detection framework that accumulates the anomaly evidence over time to prevent false positives due to nominal outliers. Our proposed model operates in an online fashion, making it suitable for real-time time-series anomaly detection tasks. Experimental evaluations across six public benchmark datasets demonstrate that our model consistently achieves superior F1 scores.
Saturday, May 31. 2025
Trading Infrastructure
As securities trading systems transition to a microservices architecture, optimizing system performance presents challenges such as inefficient resource scheduling and high service response delays. Existing container orchestration platforms lack tailored performance optimization mechanisms for trading scenarios, making it difficult to meet the stringent 50ms response time requirement imposed by exchanges. This paper introduces SealOS+, a Sealos-based performance optimization approach for securities trading, incorporating an adaptive resource scheduling algorithm leveraging deep reinforcement learning, a three-level caching mechanism for trading operations, and a Long Short-Term Memory (LSTM) based load prediction model. Real-world deployment at a securities exchange demonstrates that the optimized system achieves an average CPU utilization of 78%, reduces transaction response time to 105ms, and reaches a peak processing capacity of 15,000 transactions per second, effectively meeting the rigorous performance and reliability demands of securities trading.
DNS
Domainator: Detecting and Identifying DNS-Tunneling Malware Using Metadata Sequences
In recent years, malware with tunneling (or: covert channel) capabilities has been on the rise. While malware research has led to several methods and innovations, the detection and differentiation of malware solely based on its DNS tunneling features is still in its infancy. Moreover, no work so far has used DNS tunneling traffic to gain knowledge of the current actions taken by the malware. In this paper, we present Domainator, an approach to detect and differentiate state-of-the-art malware and DNS tunneling tools without relying on trivial (but quickly altered) features such as "magic bytes" that are embedded into subdomains. Instead, we apply an analysis of sequential patterns to identify specific types of malware. We evaluate our approach with 7 different malware samples and tunneling tools and can identify the particular malware based on its DNS traffic. We further infer the rough behavior of the particular malware through its DNS tunneling artifacts. Finally, we compare our Domainator with related methods.
Agriculture
Amid the challenges posed by global population growth and climate change, traditional agricultural Internet of Things (IoT) systems are currently undergoing a significant digital transformation to facilitate efficient big data processing. While smart agriculture utilizes artificial intelligence (AI) technologies to enable precise control, it still encounters significant challenges, including excessive reliance on agricultural expert knowledge, difficulties in fusing multimodal data, poor adaptability to dynamic environments, and bottlenecks in real-time decision-making at the edge. Large language models (LLMs), with their exceptional capabilities in knowledge acquisition and semantic understanding, provide a promising solution to address these challenges. To this end, we propose Farm-LightSeek, an edge-centric multimodal agricultural IoT data analytics framework that integrates LLMs with edge computing. This framework collects real-time farmland multi-source data (images, weather, geographic information) via sensors, performs cross-modal reasoning and disease detection at edge nodes, conducts low-latency management decisions, and enables cloud collaboration for model updates. The main innovations of Farm-LightSeek include: (1) an agricultural "perception-decision-action" closed-loop architecture; (2) cross-modal adaptive monitoring; and (3) a lightweight LLM deployment strategy balancing performance and efficiency. Experiments conducted on two real-world datasets demonstrate that Farm-LightSeek consistently achieves reliable performance in mission-critical tasks, even under the limitations of edge computing resources. This work advances intelligent real-time agricultural solutions and highlights the potential for deeper integration of agricultural IoT with LLMs.
Olfactory Inertial Odometry: Methodology for Effective Robot Navigation by Scent
Olfactory navigation is one of the most primitive mechanisms of exploration used by organisms. Navigation by machine olfaction (artificial smell) is a very difficult task to both simulate and solve. With this work, we define olfactory inertial odometry (OIO), a framework for using inertial kinematics and fast-sampling olfaction sensors to enable navigation by scent analogous to visual inertial odometry (VIO). We establish how principles from SLAM and VIO can be extrapolated to olfaction to enable real-world robotic tasks. We demonstrate OIO with three different odour localization algorithms on a real 5-DoF robot arm over an odour-tracking scenario that resembles real applications in agriculture and food quality control. Our results indicate success in establishing a baseline framework for OIO from which other research in olfactory navigation can build, and we note performance enhancements that can be made to address more complex tasks in the future.
Learning to See More: UAS-Guided Super-Resolution of Satellite Imagery for Precision Agriculture
Unmanned Aircraft Systems (UAS) and satellites are key data sources for precision agriculture, yet each presents trade-offs. Satellite data offer broad spatial, temporal, and spectral coverage but lack the resolution needed for many precision farming applications, while UAS provide high spatial detail but are limited by coverage and cost, especially for hyperspectral data. This study presents a novel framework that fuses satellite and UAS imagery using super-resolution methods. By integrating data across spatial, spectral, and temporal domains, we leverage the strengths of both platforms cost-effectively. We use estimation of cover crop biomass and nitrogen (N) as a case study to evaluate our approach. By spectrally extending UAS RGB data to the vegetation red edge and near-infrared regions, we generate high-resolution Sentinel-2 imagery and improve biomass and N estimation accuracy by 18% and 31%, respectively. Our results show that UAS data need only be collected from a subset of fields and time points. Farmers can then 1) enhance the spectral detail of UAS RGB imagery; 2) increase the spatial resolution by using satellite data; and 3) extend these enhancements spatially and across the growing season at the frequency of the satellite flights. Our SRCNN-based spectral extension model shows considerable promise for model transferability over other cropping systems in the Upper and Lower Chesapeake Bay regions. Additionally, it remains effective even when cloud-free satellite data are unavailable, relying solely on the UAS RGB input. The spatial extension model produces better biomass and N predictions than models built on raw UAS RGB images. Once trained with targeted UAS RGB data, the spatial extension model allows farmers to stop repeated UAS flights. While we introduce super-resolution advances, the core contribution is a lightweight and scalable system for affordable on-farm use.
Leaps in Thought
Memorization to Generalization: Emergence of Diffusion Models from Associative Memory
Hopfield networks are associative memory (AM) systems, designed for storing and retrieving patterns as local minima of an energy landscape. In the classical Hopfield model, an interesting phenomenon occurs when the amount of training data reaches its critical memory load -- spurious states, or unintended stable points, emerge at the end of the retrieval dynamics, leading to incorrect recall. In this work, we examine diffusion models, commonly used in generative modeling, from the perspective of AMs. The training phase of a diffusion model is conceptualized as memory encoding (training data is stored in the memory). The generation phase is viewed as an attempt at memory retrieval. In the small data regime the diffusion model exhibits a strong memorization phase, where the network creates distinct basins of attraction around each sample in the training set, akin to the Hopfield model below the critical memory load. In the large data regime, a different phase appears where an increase in the size of the training set fosters the creation of new attractor states that correspond to manifolds of the generated samples. Spurious states appear at the boundary of this transition and correspond to emergent attractor states, which are absent in the training set, but, at the same time, have distinct basins of attraction around them. Our findings provide: a novel perspective on the memorization-generalization phenomenon in diffusion models via the lens of AMs, a theoretical prediction of the existence of spurious states, and empirical validation of this prediction in commonly-used diffusion models.
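For context (my addition, not from the abstract): in the classical Hopfield model with $N$ binary neurons and Hebbian storage, the critical memory load referred to above is usually quoted as roughly

$$P_c \approx 0.138\,N$$

stored patterns; beyond this load, reliable retrieval breaks down and spurious states dominate the dynamics.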
Simulating What Didn't Happen
Simulating the Unseen: Crash Prediction Must Learn from What Did Not Happen
Traffic safety science has long been hindered by a fundamental data paradox: the crashes we most wish to prevent are precisely those events we rarely observe. Existing crash-frequency models and surrogate safety metrics rely heavily on sparse, noisy, and under-reported records, while even sophisticated, high-fidelity simulations undersample the long-tailed situations that trigger catastrophic outcomes such as fatalities. We argue that the path to achieving Vision Zero, i.e., the complete elimination of traffic fatalities and severe injuries, requires a paradigm shift from traditional crash-only learning to a new form of counterfactual safety learning: reasoning not only about what happened, but also about the vast set of plausible yet perilous scenarios that could have happened under slightly different circumstances. To operationalize this shift, our proposed agenda bridges macro to micro. Guided by crash-rate priors, generative scene engines, diverse driver models, and causal learning, near-miss events are synthesized and explained. A crash-focused digital twin testbed links micro scenes to macro patterns, while a multi-objective validator ensures that simulations maintain statistical realism. This pipeline transforms sparse crash data into rich signals for crash prediction, enabling the stress-testing of vehicles, roads, and policies before deployment. By learning from crashes that almost happened, we can shift traffic safety from reactive forensics to proactive prevention, advancing Vision Zero.
Wednesday, May 28. 2025
Continual / Continuous / Continuity
A Continual Offline Reinforcement Learning Benchmark for Navigation Tasks
Autonomous agents operating in domains such as robotics or video game simulations must adapt to changing tasks without forgetting about the previous ones. This process, called Continual Reinforcement Learning, poses non-trivial difficulties, from preventing catastrophic forgetting to ensuring the scalability of the approaches considered. Building on recent advances, we introduce a benchmark providing a suite of video-game navigation scenarios, thus filling a gap in the literature and capturing key challenges: catastrophic forgetting, task adaptation, and memory efficiency. We define a set of various tasks and datasets, evaluation protocols, and metrics to assess the performance of algorithms, including state-of-the-art baselines. Our benchmark is designed not only to foster reproducible research and to accelerate progress in continual reinforcement learning for gaming, but also to provide a reproducible framework for production pipelines -- helping practitioners to identify and to apply effective approaches.
Continual Reinforcement Learning
Continual reinforcement learning (RL) concerns agents that are expected to learn continually, rather than converge to a policy that is then fixed for evaluation. Such an approach is well suited to environments the agent perceives as changing, which renders any static policy ineffective over time. The few simulators explicitly designed for empirical research in continual RL are often limited in scope or complexity, and it is now common for researchers to modify episodic RL environments by artificially incorporating abrupt task changes during interaction. In this paper, we introduce AgarCL, a research platform for continual RL that allows for a progression of increasingly sophisticated behaviour. AgarCL is based on the game Agar.io, a non-episodic, high-dimensional problem featuring stochastic, ever-evolving dynamics, continuous actions, and partial observability. Additionally, we provide benchmark results reporting the performance of DQN, PPO, and SAC in both the primary, challenging continual RL problem and across a suite of smaller tasks within AgarCL, each of which isolates aspects of the full environment and allows us to characterize the challenges posed by different aspects of the game.
Biological brains demonstrate complex neural activity, where the timing and interplay between neurons are critical to how brains process information. Most deep learning architectures simplify neural activity by abstracting away temporal dynamics. In this paper we challenge that paradigm. By incorporating neuron-level processing and synchronization, we can effectively reintroduce neural timing as a foundational element. We present the Continuous Thought Machine (CTM), a model designed to leverage neural dynamics as its core representation. The CTM has two core innovations: (1) neuron-level temporal processing, where each neuron uses unique weight parameters to process a history of incoming signals; and (2) neural synchronization employed as a latent representation. The CTM aims to strike a balance between oversimplified neuron abstractions that improve computational efficiency, and biological realism. It operates at a level of abstraction that effectively captures essential temporal dynamics while remaining computationally tractable for deep learning. We demonstrate the CTM's strong performance and versatility across a range of challenging tasks, including ImageNet-1K classification, solving 2D mazes, sorting, parity computation, question-answering, and RL tasks. Beyond displaying rich internal representations and offering a natural avenue for interpretation owing to its internal process, the CTM is able to perform tasks that require complex sequential reasoning. The CTM can also leverage adaptive compute, where it can stop earlier for simpler tasks, or keep computing when faced with more challenging instances. The goal of this work is to share the CTM and its associated innovations, rather than pushing for new state-of-the-art results. To that end, we believe the CTM represents a significant step toward developing more biologically plausible and powerful artificial intelligence systems.