Independent Research Lab · United Kingdom

Advancing AI through
open science

We investigate the foundations of machine reasoning, computational intelligence, and safe AI systems — then publish everything. Our research artefacts span LLM cognition, formal optimisation, edge inference, and high-performance computing.

24 Research Artefacts
4 Research Domains
100% Open Access
LLM Reasoning · Formal Optimisation · Edge Inference · Computational Intelligence · Memory-Safe Systems · Agent Cognition · Quantitative Methods · Vector Representations · On-Device AI · Prompt Theory · Safe AI Systems · Programmable Databases · Signal Processing · Constraint Solving

Four questions
driving our work.

Each research area addresses an open problem at the frontier of AI and computational science.

🧠

LLM Cognition & Prompt Theory

How do we formalise the relationship between prompt structure and model behaviour? We study declarative prompt specification, automatic optimisation, and routing — treating prompts as first-class research objects rather than ad-hoc strings.

🔬

Safe & Verifiable Computing

What does it take to run AI-generated code safely? Our research into memory-safe language design, container sandboxing, and NUMA-aware scheduling explores the systems foundations needed for trustworthy autonomous computation.

🧮

Formal Optimisation & Decision Science

Can natural language interface with mathematical solvers? We investigate the bridge between human intent and formally provable solutions — from constraint satisfaction to quantitative signal compilation and intelligent ranking algorithms.

📡

Edge Intelligence & On-Device AI

What are the limits of local inference? We study on-device LLM execution, mobile agent architectures, and privacy-preserving AI — exploring how much intelligence can live at the edge without any cloud dependency.

24 open-source projects.
Each one a hypothesis, tested.

Every repository is both a research contribution and a usable tool — MIT or GPL-3.0 licensed for the community.

LLM Cognition & Prompt Theory
Safe & Verifiable Computing
Formal Optimisation & Decision Science
Edge Intelligence & On-Device AI

Research that ships.
Science that scales.

Skelf Research operates at the boundary between academic inquiry and real-world systems. We believe the most important questions in AI today — about reasoning, safety, efficiency, and privacy — are best answered by building working prototypes and publishing everything.

Our methodology is simple: identify an open problem, construct a hypothesis as software, stress-test it against real workloads, and release the results. Every repository is a peer-reviewable experiment.

01

Hypotheses as Software

Each project encodes a research question. The codebase is the proof — runnable, testable, and falsifiable.

02

Open Science by Default

24 public repositories. Every experiment is reproducible, and every finding is auditable by the global research community.

03

Systems-Level Rigour

We choose Rust, Zig, and Go not for fashion but for falsifiability — deterministic performance makes claims measurable.

04

Privacy as a Research Constraint

On-device inference and zero-trust architectures aren't add-ons — they're design constraints that shape better science.

Join the research.
Shape what's next.

We welcome academic collaborators, research partners, and funders who believe the hardest problems in AI deserve open, rigorous, reproducible investigation.