Survey: Graph Databases
Graph databases have become essential tools for managing the complex, interconnected data common in areas such as social networks, bioinformatics, and recommendation systems. Unlike traditional relational databases, they offer a more natural way to model and query intricate relationships, making them particularly effective for applications that demand both flexibility and efficiency in handling highly connected data.
Despite their increasing use, graph databases face notable challenges. One significant issue is the irregular nature of graph data, often marked by structural sparsity (visible, for instance, in a largely empty adjacency matrix representation), which can lead to inefficient read and write operations. Other obstacles
include the high computational demands of traversal-based queries, especially
within large-scale networks, and complexities in managing transactions in
distributed graph environments. Additionally, the reliance on traditional
centralized architectures limits the scalability of Online Transaction
Processing (OLTP), creating bottlenecks due to contention, CPU overhead, and
network bandwidth constraints.
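To make the sparsity and traversal points concrete, consider the following minimal Python sketch; the graph, its size, and the representation choices are illustrative assumptions, not taken from the survey. A dense adjacency matrix wastes space on zeros, an adjacency list stores only actual edges, and a breadth-first traversal over the list shows the data-dependent access pattern that makes traversal queries expensive.

    from collections import deque

    # Hypothetical sparse directed graph given as an edge list.
    edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
    n = 5

    # Dense adjacency matrix: n*n cells, almost all zero for sparse graphs,
    # so reads and writes move far more data than the edges themselves.
    matrix = [[0] * n for _ in range(n)]
    for u, v in edges:
        matrix[u][v] = 1

    # Adjacency list: storage proportional to the edge count, the usual
    # remedy for structural sparsity in graph storage engines.
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)

    def bfs(start):
        """Traversal-based query: each hop is a data-dependent memory
        access, which is why large traversals are computationally costly."""
        seen, order, queue = {start}, [], deque([start])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return order

    print(bfs(0))  # -> [0, 1, 2, 3, 4]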
This paper presents a thorough survey of graph databases. It begins by examining property graph models, query languages, and storage architectures, the foundational aspects that users and developers typically engage with. It then provides a detailed analysis of recent advances in graph database technologies, evaluating them along key dimensions (architecture, deployment, usage, and development) that collectively define the capabilities of graph database solutions.
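As a reading aid for the property graph model referenced above, the sketch below builds a two-node graph in plain Python; the labels, property names, and query are hypothetical illustrations rather than the API of any particular database.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        label: str                      # e.g. "Person"
        props: dict = field(default_factory=dict)

    @dataclass
    class Edge:
        src: Node
        dst: Node
        rel: str                        # relationship type, e.g. "FOLLOWS"
        props: dict = field(default_factory=dict)

    # A tiny property graph: labeled nodes and a typed, attributed edge.
    alice = Node("Person", {"name": "Alice"})
    bob = Node("Person", {"name": "Bob"})
    graph = [Edge(alice, bob, "FOLLOWS", {"since": 2021})]

    # A traversal-style query ("whom does Alice follow?") runs directly
    # over the relationships, with no join tables in between.
    targets = [e.dst.props["name"] for e in graph
               if e.rel == "FOLLOWS" and e.src.props.get("name") == "Alice"]
    print(targets)  # -> ['Bob']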
A Survey of In-Network Systems for Intelligent, High-Efficiency
AI and Topology Optimization
In-network computation represents a transformative approach to addressing the
escalating demands of Artificial Intelligence (AI) workloads on network
infrastructure. By leveraging the processing capabilities of network devices
such as switches, routers, and Network Interface Cards (NICs), this paradigm
enables AI computations to be performed directly within the network fabric,
significantly reducing latency, enhancing throughput, and optimizing resource
utilization. This paper provides a comprehensive analysis of optimizing
in-network computation for AI, exploring the evolution of programmable network
architectures, such as Software-Defined Networking (SDN) and Programmable Data
Planes (PDPs), and their convergence with AI. It examines methodologies for
mapping AI models onto resource-constrained network devices, addressing
challenges like limited memory and computational capabilities through efficient
algorithm design and model compression techniques. The paper also highlights
advancements in distributed learning, particularly in-network aggregation, and
the potential of federated learning to enhance privacy and scalability.
Frameworks like Planter and Quark are discussed for simplifying development,
alongside key applications such as intelligent network monitoring, intrusion
detection, traffic management, and Edge AI. Future research directions,
including runtime programmability, standardized benchmarks, and new
application paradigms, are proposed to advance this rapidly evolving field.
This survey underscores the potential of in-network AI to create intelligent,
efficient, and responsive networks capable of meeting the demands of
next-generation AI applications.
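To give a feel for the in-network aggregation idea mentioned above, the following toy simulation has a "switch" sum gradient chunks from several workers so that only one aggregate leaves the network toward the parameter server. The worker count, chunk size, and averaging rule are assumptions for illustration; real systems aggregate fixed-point values in hardware switch registers.

    import random

    NUM_WORKERS = 4   # assumed number of training workers
    CHUNK_SIZE = 8    # assumed elements per aggregation slot

    def worker_gradients(seed):
        """Stand-in for one worker's gradient chunk."""
        rng = random.Random(seed)
        return [rng.uniform(-1.0, 1.0) for _ in range(CHUNK_SIZE)]

    class SwitchAggregator:
        """Toy model of in-network aggregation: keep one running sum per
        slot and release the result once every worker has contributed."""
        def __init__(self, num_workers):
            self.num_workers = num_workers
            self.sums = [0.0] * CHUNK_SIZE
            self.arrived = 0

        def ingest(self, chunk):
            for i, g in enumerate(chunk):
                self.sums[i] += g
            self.arrived += 1
            if self.arrived == self.num_workers:
                # Only the aggregate leaves the switch, cutting traffic to
                # the parameter server by roughly a factor of num_workers.
                return [s / self.num_workers for s in self.sums]
            return None

    switch = SwitchAggregator(NUM_WORKERS)
    for w in range(NUM_WORKERS):
        out = switch.ingest(worker_gradients(seed=w))
        if out is not None:
            print("mean gradient:", out)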
Software Architecture Meets LLMs: A Systematic Literature Review
Large Language Models (LLMs) are used for many different software engineering
tasks. In software architecture, they have been applied to tasks such as
classification of design decisions, detection of design patterns, and
generation of software architecture design from requirements. However, there is
little overview of how well they work, what challenges exist, and what open
problems remain. In this paper, we present a systematic literature review on
the use of LLMs in software architecture. We analyze 18 research articles to
answer five research questions, such as which software architecture tasks LLMs
are used for, how much automation they provide, which models and techniques are
used, and how these approaches are evaluated. Our findings show that while LLMs
are increasingly applied to a variety of software architecture tasks and often
outperform baselines, some areas, such as generating source code from
architectural design, cloud-native computing and architecture, and checking
conformance, remain underexplored. Although current approaches mostly use simple
prompting techniques, we identify a growing research interest in refining
LLM-based approaches by integrating advanced techniques.
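To illustrate the simple prompting style that dominates the reviewed approaches, here is a minimal zero-shot classification sketch. The label set (one common design-decision taxonomy) and the llm() stub are assumptions; the stub must be wired to a real model client before use.

    LABELS = ["existence", "property", "executive"]  # one common taxonomy

    def build_prompt(decision_text: str) -> str:
        return (
            "Classify the following software architecture design decision "
            f"into one of {LABELS}. Answer with the label only.\n\n"
            f"Decision: {decision_text}"
        )

    def llm(prompt: str) -> str:
        # Placeholder: substitute a call to an actual model client here.
        return "existence"

    def classify(decision_text: str) -> str:
        answer = llm(build_prompt(decision_text)).strip().lower()
        return answer if answer in LABELS else "unknown"

    print(classify("We will adopt a message broker to decouple services."))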
A Survey on Open-Source Edge Computing Simulators and Emulators: The
Computing and Networking Convergence Perspective
Edge computing, with its low latency, dynamic scalability, and location awareness, together with the ongoing convergence of computing and communication paradigms, has been successfully applied in critical domains such as industrial IoT, smart healthcare, smart homes, and public safety. This paper provides a comprehensive
survey of open-source edge computing simulators and emulators, presented in our
GitHub repository (https://github.com/qijianpeng/awesome-edge-computing),
emphasizing the convergence of computing and networking paradigms. By examining
more than 40 tools, including CloudSim and NS-3, we identify their strengths and limitations in simulating and emulating edge environments. This survey classifies these tools into three categories: packet-level simulators, application-level simulators, and emulators. Furthermore, we evaluate them across five
dimensions, ranging from resource representation to resource utilization. The
survey highlights the integration of different computing paradigms, packet
processing capabilities, support for edge environments, user-defined metric
interfaces, and scenario visualization. The findings aim to guide researchers
in selecting appropriate tools for developing and validating advanced computing
and networking technologies.
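For a flavor of what application-level simulation involves, the closing sketch below runs a minimal discrete-event comparison of offloaded versus local task completion times; every parameter (delays, service times, arrival pattern) is invented for illustration and corresponds to no surveyed tool.

    import heapq

    # Assumed toy parameters, in seconds.
    UPLINK_DELAY = 0.05        # device -> edge server transfer time
    EDGE_SERVICE_TIME = 0.02   # processing time on the shared edge server
    LOCAL_SERVICE_TIME = 0.10  # processing time if the task stays local

    def simulate(arrivals, offload):
        """Return per-task completion times, drained in event order."""
        events = []            # min-heap of (finish_time, task_id)
        server_free_at = 0.0   # the edge server handles one task at a time
        for tid, t in enumerate(arrivals):
            if offload:
                start = max(t + UPLINK_DELAY, server_free_at)
                server_free_at = start + EDGE_SERVICE_TIME
                heapq.heappush(events, (server_free_at, tid))
            else:
                # Each device processes its own task independently.
                heapq.heappush(events, (t + LOCAL_SERVICE_TIME, tid))
        return {tid: round(finish, 3)
                for finish, tid in sorted(events)}

    arrivals = [0.00, 0.01, 0.02, 0.03]
    print("offloaded:", simulate(arrivals, offload=True))
    print("local:    ", simulate(arrivals, offload=False))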