Research Blog
Technical articles and accessible explanations from across our four research domains.
The story of building a Lua-native database in pure Lua, then rewriting it in Rust with RocksDB — and what the performance comparison tells us.
Why traditional search retrieves first and ranks later — and how slorg inverts this by understanding intent before fetching results.
What happens when LLM agents need to remember across sessions — structured memory schemas, retrieval strategies, and the memory-context distinction.
Building approximate nearest-neighbour search on SQLite in pure Rust — and why you might not need a dedicated vector database.
Why AI agents need scoped, time-limited credentials — and how perishable implements zero-trust patterns for LLM API access.
A practical framework for managing prompts as versioned dependencies — tackling drift, regression, and reproducibility.
How compere uses multi-armed bandit (MAB) algorithms to rank items effectively with minimal pairwise feedback — applications in search and recommendation.
Why treating prompts as typed, portable artefacts changes how we reason about LLM behaviour — and how promptel implements this idea.
Describe optimisation problems in plain English and receive solutions with mathematical guarantees — no PhD required.
How route-switch uses MIPROv2 to automatically select the right model for each query — balancing cost, quality, and latency.
Building AI agents that browse, observe, and automate tasks entirely on-device — the autonomy-safety spectrum on mobile.
LLM agents write systems code, but C is unsafe and Rust is hard to generate. fastC explores the middle path.
A post-mortem on building a local LLM serving layer — llama.cpp integration, model management, and where existing tools constrain research.
What happens when you run a full LLM on mobile hardware with zero cloud dependency — memory, latency, and model quality on consumer devices.
How zviz uses Zig's comptime capabilities to build gVisor-inspired sandboxing with near-zero runtime cost.
Language choice as research methodology — how memory-safe, deterministic-performance languages produce falsifiable systems claims.
From visual signal specification to verified Rust executable — how sigc turns alpha hypotheses into production-ready code in minutes.
The case for radical openness in AI research — reproducibility, falsifiability, and community trust through 24 open-source projects.