Leveraging Time-Series Foundation Models in Smart Agriculture for Soil
Moisture Forecasting
The recent surge in foundation models for natural language processing and
computer vision has fueled innovation across various domains. Inspired by this
progress, we explore the potential of foundation models for time-series
forecasting in smart agriculture, a field often plagued by limited data
availability. Specifically, this work presents a novel application of
$\texttt{TimeGPT}$, a state-of-the-art (SOTA) time-series foundation model, to
predict soil water potential ($\psi_\mathrm{soil}$), a key indicator of field
water status that is typically used for irrigation advice. Traditionally, this
task has relied on a wide array of input variables. We explore
$\texttt{TimeGPT}$'s ability to forecast $\psi_\mathrm{soil}$ in: ($i$) a
zero-shot setting, ($ii$) a fine-tuned setting relying solely on historical
$\psi_\mathrm{soil}$ measurements, and ($iii$) a fine-tuned setting where we
also add exogenous variables to the model. We compare $\texttt{TimeGPT}$'s
performance to established SOTA baseline models for forecasting
$\psi_\mathrm{soil}$. Our results demonstrate that $\texttt{TimeGPT}$ achieves
competitive forecasting accuracy using only historical $\psi_\mathrm{soil}$
data, highlighting its remarkable potential for agricultural applications. This
research paves the way for time-series foundation models to support sustainable
development in agriculture by enabling forecasting tasks that traditionally
relied on extensive data collection and domain expertise.
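To make the three evaluation settings concrete, the sketch below shows how such forecasts could be issued through Nixtla's publicly documented $\texttt{NixtlaClient}$ interface; the file names, column names, forecast horizon, and fine-tuning budget are illustrative assumptions, not the experimental configuration used in this work.

```python
# Illustrative sketch of the three settings, assuming Nixtla's public `nixtla`
# client; file names, column names, and hyperparameters are hypothetical.
import pandas as pd
from nixtla import NixtlaClient

client = NixtlaClient(api_key="YOUR_API_KEY")

# Historical soil water potential, one row per (sensor, timestamp).
df = pd.read_csv("psi_soil.csv")  # columns: unique_id, ds, psi_soil

# (i) Zero-shot: forecast from history alone, no weight updates.
zero_shot = client.forecast(df=df, h=24, time_col="ds", target_col="psi_soil")

# (ii) Fine-tuned on historical psi_soil measurements only.
fine_tuned = client.forecast(
    df=df, h=24, time_col="ds", target_col="psi_soil", finetune_steps=50
)

# (iii) Fine-tuned with exogenous drivers (e.g., rainfall, temperature),
# whose values over the forecast horizon are supplied via X_df.
df_exog = pd.read_csv("psi_soil_with_weather.csv")  # history + exogenous columns
future_exog = pd.read_csv("future_weather.csv")     # exogenous values over horizon
with_exog = client.forecast(
    df=df_exog, h=24, time_col="ds", target_col="psi_soil",
    X_df=future_exog, finetune_steps=50,
)
```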
Federated Assemblies
A citizens' assembly is a group of people who are randomly selected to
represent a larger population in a deliberation. While this approach has
successfully strengthened democracy, it has certain limitations that suggest
the need for assemblies to form and associate more organically. In response, we
propose federated assemblies, where assemblies are interconnected, and each
parent assembly is selected from members of its child assemblies. The main
technical challenge is to develop random selection algorithms that meet new
representation constraints inherent in this hierarchical structure. We design
and analyze several algorithms that provide different representation guarantees
under various assumptions on the structure of the underlying graph.
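As a deliberately simplified illustration of the kind of representation constraint at play, the toy sketch below selects a parent assembly by stratified sampling from its child assemblies, allocating each child a seat share proportional to its size via the largest-remainder rule; this is a hypothetical example, not one of the algorithms designed and analyzed in the paper.

```python
# Toy illustration (not the paper's algorithms): select a parent assembly
# from the members of its child assemblies, giving each child a number of
# seats proportional to its size so every child remains represented.
import math
import random

def select_parent(children: dict[str, list[str]], k: int, seed: int = 0) -> list[str]:
    """Stratified random selection of k parent members across child assemblies."""
    rng = random.Random(seed)
    total = sum(len(m) for m in children.values())
    # Each child's exact proportional share of the k parent seats.
    shares = {c: k * len(m) / total for c, m in children.items()}
    seats = {c: math.floor(s) for c, s in shares.items()}
    # Hand leftover seats to the children with the largest fractional remainders.
    leftover = k - sum(seats.values())
    for c in sorted(shares, key=lambda c: shares[c] - seats[c], reverse=True)[:leftover]:
        seats[c] += 1
    parent: list[str] = []
    for c, members in children.items():
        parent.extend(rng.sample(members, min(seats[c], len(members))))
    return parent

children = {
    "district_A": [f"A{i}" for i in range(30)],
    "district_B": [f"B{i}" for i in range(20)],
    "district_C": [f"C{i}" for i in range(10)],
}
print(select_parent(children, k=6))  # 3 from A, 2 from B, 1 from C
```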
ChatGPT as the Marketplace of Ideas: Should Truth-Seeking Be the Goal of
AI Content Governance?
As one of the most enduring metaphors within legal discourse, the marketplace
of ideas has wielded considerable influence over the jurisprudential landscape
for decades. A century after the inception of this theory, ChatGPT emerged as a
revolutionary technological advancement in the twenty-first century. This
research finds that ChatGPT effectively manifests the marketplace metaphor. It
not only instantiates the promises envisaged by generations of legal scholars
but also lays bare the perils discerned through sustained academic critique.
Specifically, the workings of ChatGPT and the marketplace of ideas theory
exhibit at least four common features: arena, means, objectives, and flaws.
These shared attributes are sufficient to render ChatGPT the most qualified
engine in history for actualizing the marketplace of ideas theory.
The comparison of the marketplace theory and ChatGPT merely marks a starting
point. A more meaningful undertaking entails reevaluating and reframing both
internal and external AI policies by referring to the accumulated experience,
insights, and suggestions researchers have raised to fix the marketplace
theory. Here, a pivotal issue is: should truth-seeking be set as the goal of AI
content governance? Given the unattainability of the absolute truth-seeking
goal, I argue against adopting zero-risk policies. Instead, a more judicious
approach would be to embrace a knowledge-based alternative wherein large
language models (LLMs) are trained to generate competing and divergent
viewpoints based on sufficient justifications. This research also argues that
so-called AI content risks are not created by AI companies but are inherent in
the entire information ecosystem. Thus, the burden of managing these risks
should be distributed among different social actors, rather than being solely
shouldered by chatbot companies.
Why Algorithms Remain Unjust: Power Structures Surrounding Algorithmic
Activity
Algorithms play an increasingly significant role in our social lives.
Unfortunately, they often perpetuate social injustices while doing so. The
popular means of addressing these algorithmic injustices has been algorithmic
reformism: fine-tuning the algorithm itself to be more fair,
accountable, and transparent. While commendable, the emerging discipline of
critical algorithm studies shows that reformist approaches have failed to
curtail algorithmic injustice because they ignore the power structure
surrounding algorithms. Heeding calls from critical algorithm studies to
analyze this power structure, I employ a framework developed by Erik Olin
Wright to examine the configuration of power surrounding Algorithmic Activity:
the ways in which algorithms are researched, developed, trained, and deployed
within society. I argue that the reason Algorithmic Activity is unequal,
undemocratic, and unsustainable is that the power structure shaping it is one
of economic empowerment rather than social empowerment. For Algorithmic
Activity to be socially just, we need to transform this power configuration to
empower the people at the other end of an algorithm. To this end, I explore
Wright's symbiotic, interstitial, and ruptural transformations in the context
of Algorithmic Activity, as well as how they may be applied in a hypothetical
research project that uses algorithms to address a social issue. I conclude
with my vision for socially just Algorithmic Activity, asking that future work
strive to integrate the proposed transformations and develop new mechanisms
for social empowerment.