Your AI can't prove why it said what it said. In regulated industries, that's not a quality problem — it's a liability.

Current LLM systems guess when they should refuse, commit when they should wait, and leave no auditable trace of why. Aurora-Lens is the governance layer that changes this.

Deterministic governance infrastructure for AI in regulated environments.

Why This Matters

A patient presents with chest pain and elevated troponin.

The attending physician asks the AI clinical decision support system for a differential diagnosis. Two conditions remain structurally admissible given the available evidence. The AI must not collapse to a single answer — and it must be able to prove it didn't.

Standard LLM
"Based on the presentation, this is most consistent with acute myocardial infarction."

No audit trail. No record of eliminated alternatives. No proof the conclusion was warranted by the evidence.

Liability created.

With Aurora-Lens
Two interpretations remain admissible: MI and pulmonary embolism. Insufficient constraint to collapse.

→ STOP — additional evidence required.

Decision logged. Hash-chained. Deterministically replayable. Audit-ready.

AI systems deployed in regulated domains have a structural deficiency.

They cannot distinguish between having enough evidence to commit and not having enough. They default to an answer regardless. In medicine, law, finance, and government, this is not a usability issue. It is an architectural failure with legal and regulatory consequences.

1. They collapse ambiguity prematurely

When multiple interpretations remain valid, LLMs pick one and present it as definitive. No trace of the alternatives. No justification for the selection.

2. They cannot refuse

Saying "I don't have enough information" is trained behaviour, not architectural constraint. It can be overridden, fine-tuned away, or simply not triggered.

3. They leave no forensic trail

There is no auditable record of what was considered, what was eliminated, and why a particular output was permitted. Regulators cannot verify what they cannot trace.

Aurora-Lens sits between your application and the LLM. It governs what gets through.

Aurora-Lens is a deterministic admissibility layer that sits outside the model. The LLM generates language; Aurora-Lens decides whether that language is permitted to reach the user. It evaluates outputs against persistent state and domain constraints, then enforces exactly one outcome: ADMIT, STOP, or REFUSE — with a tamper-evident audit record of the decision path.

Your Application
Aurora-Lens — Governance Proxy
LLM Provider (OpenAI / Anthropic / Other)
Tamper-Evident Audit Ledger
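The proxy flow above can be sketched in a few lines. This is an illustrative Python sketch, not the Aurora-Lens API: the callables `llm_call`, `is_policy_violation`, and `admissible_interpretations` are hypothetical stand-ins for the provider call and the governance checks.

```python
from typing import Callable


def governed_completion(
    query: str,
    llm_call: Callable[[str], str],
    is_policy_violation: Callable[[str], bool],
    admissible_interpretations: Callable[[str, str], list[str]],
) -> dict:
    """Gate an LLM response: REFUSE before generation, STOP or ADMIT after."""
    # Policy violations are refused before the model is ever called.
    if is_policy_violation(query):
        return {"outcome": "REFUSE", "output": None}
    # The model generates language; the proxy decides whether it gets through.
    draft = llm_call(query)
    remaining = admissible_interpretations(query, draft)
    if len(remaining) != 1:
        # Multiple interpretations still admissible: suspend judgment.
        return {"outcome": "STOP", "output": None, "remaining": remaining}
    return {"outcome": "ADMIT", "output": draft}
```

Two remaining interpretations yield STOP rather than a guessed answer, mirroring the clinical scenario above.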

Three outcomes. Nothing else.

ADMIT
The output is permitted. Sufficient constraint exists to warrant commitment. Decision is logged with full provenance.
STOP
Insufficient evidence to commit. Multiple interpretations remain admissible. The system suspends judgment and requests additional constraint.
REFUSE
The query violates domain governance policy. The system will not proceed. The refusal is logged as a legitimate governance outcome.

What makes this different from guardrails.

Guardrails are trained or prompted behaviour inside the model, so they can be overridden, fine-tuned away, or simply fail to trigger. Aurora-Lens is not part of the model at all. It is deterministic infrastructure that evaluates each output against persistent state and explicit domain constraints, and the model has no mechanism to bypass it. The properties below follow from that architectural separation.

Parallel Interpretations
Maintains multiple admissible interpretations simultaneously until evidence forces collapse. Does not guess.
Constraint-Only Collapse
Interpretations are eliminated only by binding evidence, never by heuristic, probability, or default.
Refusal Is Valid
Suspended judgment is a correct governance outcome, not an error state. The system is designed to say no.
Deterministic Replay
Every decision can be reproduced from the same inputs. No stochastic variation. Identical inputs produce identical governance decisions.
Forensic Audit Trail
Hash-chained event records in AFL-JSONL-1 format. Every admit, stop, and refuse is logged with cryptographic integrity.
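Hash-chaining an append-only event log can be sketched as below. This illustrates the general technique only; the record shape here is hypothetical and does not follow the actual AFL-JSONL-1 specification.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first record


def append_event(ledger: list[dict], event: dict) -> dict:
    """Append an event whose hash commits to the previous record."""
    prev_hash = ledger[-1]["hash"] if ledger else GENESIS
    body = {"prev": prev_hash, **event}
    # Canonical serialisation so the digest is deterministic.
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    record = {**body, "hash": digest}
    ledger.append(record)
    return record


def verify(ledger: list[dict]) -> bool:
    """Recompute the chain; any edited or reordered record breaks it."""
    prev = GENESIS
    for rec in ledger:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body.get("prev") != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

Each record commits to its predecessor, so altering any past decision invalidates every later hash, which is what makes the trail tamper-evident rather than merely tamper-resistant.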

Built for environments where wrong answers have consequences.

Aurora-Lens targets regulated industries where AI outputs carry legal, financial, or clinical liability — and where audit requirements are not optional.

Healthcare & Clinical AI · Legal Research & Advisory · Financial Services & Compliance · Government & Defence · GRC Platforms · Insurance & Actuarial

40 years of pattern recognition. 5 patents. Working code.

This is not a whitepaper or a framework deck. Aurora-Lens is implemented, tested, and produces real forensic audit output. The underlying research spans four decades of work in compositional reasoning and cognitive architecture.

5 Provisional Patents

Covering admissibility-controlled commitment, audit-grade state transitions, ambiguity preservation, epistemic refusal, and non-temporal reasoning substrates.

Working Implementation

Production-grade governance proxy with deterministic admissibility control, cryptographic audit trails, and full integration test coverage.

Minimal runtime demo (policy refusal + audit log capture): aurora-governor-demo · example run + audit

Published Research

Two preprints published on Zenodo. Empirical evidence, architectural specification, and regulatory alignment. Open data and replication materials on OSF.

Independent Researcher

Melbourne, Australia. Background in philosophy, cognitive architecture, and structured reasoning systems since the 1980s.

Epistemic Legitimacy as a Governance Layer for Large Language Models: Architecture and Implementation

Stokes, M. (2026). Preprint.  ·  ORCID: 0009-0004-6422-4174

Read on Zenodo →  ·  Data & code on OSF →

Operational Alignment of Aurora-Lens with OECD Due Diligence Guidance for Responsible AI (2026)

Stokes, M. (2026). Preprint.  ·  ORCID: 0009-0004-6422-4174

Read on Zenodo →

Empirical Demonstration of LLM Failure Modes in Pronoun Ambiguity Resolution

Stokes, M. (2025). Raw model outputs, replication protocol, and working implementation.  ·  ORCID: 0009-0004-6422-4174

Data & code on OSF →

Let's talk about your compliance problem.

If you're deploying AI in a regulated environment and need governance that actually works — deterministic, auditable, and architecturally sound — I'd like to hear from you.