Current LLM systems guess when they should refuse, commit when they should wait, and leave no auditable trace of why. Aurora-Lens is the governance layer that changes this.
Deterministic governance infrastructure for AI in regulated environments.
The attending physician asks the AI clinical decision support system for a differential diagnosis. Two conditions remain structurally admissible given the available evidence. The AI must not collapse to a single answer — and it must be able to prove it didn't.
Current LLMs cannot distinguish between having enough evidence to commit and not having enough; they default to an answer regardless. In medicine, law, finance, and government, this is not a usability issue. It is an architectural failure with legal and regulatory consequences.
When multiple interpretations remain valid, LLMs pick one and present it as definitive. No trace of the alternatives. No justification for the selection.
Saying "I don't have enough information" is trained behaviour, not architectural constraint. It can be overridden, fine-tuned away, or simply not triggered.
There is no auditable record of what was considered, what was eliminated, and why a particular output was permitted. Regulators cannot verify what they cannot trace.
Aurora-Lens is a deterministic admissibility layer that sits outside the model. The LLM generates language; Aurora-Lens decides whether that language is permitted to reach the user. It evaluates outputs against persistent state and domain constraints, then enforces exactly one outcome: ADMIT, STOP, or REFUSE — with a tamper-evident audit record of the decision path.
Three outcomes. Nothing else.
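To make the contract concrete, the sketch below shows what a deterministic three-way gate can look like. It is illustrative, not Aurora-Lens's internals: the function name `admissibility` and its inputs (`violations`, `admissible_hypotheses`) are assumptions for this example.

```python
from enum import Enum

class Outcome(Enum):
    ADMIT = "ADMIT"    # output satisfies every constraint: release to the user
    STOP = "STOP"      # more than one hypothesis remains admissible: hold, do not collapse
    REFUSE = "REFUSE"  # output violates a domain constraint: block entirely

def admissibility(violations: list[str], admissible_hypotheses: set[str]) -> Outcome:
    """Deterministic three-way gate: identical inputs always yield the same outcome."""
    if violations:                       # any hard constraint violation wins
        return Outcome.REFUSE
    if len(admissible_hypotheses) > 1:   # structural ambiguity is preserved, not guessed away
        return Outcome.STOP
    return Outcome.ADMIT
```

Determinism is the point: the same evaluation state always produces the same outcome, so the decision can be replayed and audited. In the clinical scenario above, two admissible conditions would force STOP rather than a confident single diagnosis.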
Aurora-Lens targets regulated industries where AI outputs carry legal, financial, or clinical liability — and where audit requirements are not optional.
This is not a whitepaper or a framework deck. Aurora-Lens is implemented, tested, and produces real forensic audit output. The underlying research spans four decades of work in compositional reasoning and cognitive architecture.
The research covers admissibility-controlled commitment, audit-grade state transitions, ambiguity preservation, epistemic refusal, and non-temporal reasoning substrates.
Production-grade governance proxy with deterministic admissibility control, cryptographic audit trails, and full integration test coverage.
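One common way to make an audit trail tamper-evident is a hash chain, in which each record commits to the hash of its predecessor. The sketch below assumes SHA-256 over canonical JSON; Aurora-Lens's actual record format and cryptography are not specified here, and the function names are hypothetical.

```python
import hashlib
import json
import time

def _digest(entry: dict) -> str:
    # Hash over the entry minus its own hash field, using canonical JSON.
    body = {k: v for k, v in entry.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_audit(log: list[dict], decision: str, rationale: dict) -> dict:
    """Append a hash-chained record: editing any past record invalidates
    every hash that follows it."""
    entry = {
        "ts": time.time(),
        "decision": decision,            # "ADMIT" | "STOP" | "REFUSE"
        "rationale": rationale,          # what was considered, eliminated, and why
        "prev": log[-1]["hash"] if log else "0" * 64,
    }
    entry["hash"] = _digest(entry)
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash and link; False means the trail was altered."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev or entry["hash"] != _digest(entry):
            return False
        prev = entry["hash"]
    return True
```

A verifier can rerun `verify_chain` over the exported log at any time, which is what makes the trail auditable by a third party rather than merely logged.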
Minimal runtime demo (policy refusal + audit log capture): aurora-governor-demo · example run + audit
Two preprints published on Zenodo. Empirical evidence, architectural specification, and regulatory alignment. Open data and replication materials on OSF.
Melbourne, Australia. Background in philosophy, cognitive architecture, and structured reasoning systems since the 1980s.