Governance has one decisive moment: when the system is about to answer, act, or advise. If governance can't intervene at that moment, at machine speed, it isn't governing anything. It's documenting intentions.
"Who can stop this, right now — and on what basis?"
If the answer is a committee, a policy, a review process, or post-deployment monitoring, then governance is not controlling the system. It is describing intentions after the fact. AI changes the timing: a system can generate thousands of outputs before a human meeting can even be scheduled.
Two models of governance compete here, and only one of them can stop a system at the moment of impact.

Policy-as-document governance defines values, roles, and process expectations. It works when systems are slow, stable, and human-mediated. It fails under pressure because it lacks a hard stop mechanism, producing "we meant well" evidence rather than enforceable control.

Runtime governance governs the system inside the execution path. It can halt, refuse, or require clarification at the moment of impact, and it creates evidence that survives incident review and compliance audit. It turns policy into a runtime boundary, not a PDF.
If a system cannot reliably produce STOP, CLARIFY, or REFUSE when needed, it does not have governance. It has hope.
Enforceable runtime governance means certain outcomes are mandatory and auditable — not optional behaviours the model might or might not produce depending on how it was trained.
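As a minimal sketch of what "mandatory, typed outcomes" means in practice: the gate sits inside the execution path, and the model is only invoked if every rule allows it. All names here (`Verdict`, `governed_call`, the toy rule) are illustrative assumptions, not Aurora-Lens's actual API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Verdict(Enum):
    ALLOW = "allow"
    STOP = "stop"
    CLARIFY = "clarify"
    REFUSE = "refuse"

@dataclass
class Decision:
    verdict: Verdict
    reason: str
    output: Optional[str] = None

def governed_call(request: str, rules, model) -> Decision:
    # The gate is in-path: the model runs only after every rule
    # returns ALLOW. Any other verdict is an explicit, typed
    # outcome the caller must handle, never a silent failure.
    for rule in rules:
        verdict, reason = rule(request)
        if verdict is not Verdict.ALLOW:
            return Decision(verdict, reason)
    return Decision(Verdict.ALLOW, "all rules passed", model(request))

# Toy rule and model, purely for illustration.
def no_medical_advice(request: str):
    if "diagnose" in request.lower():
        return Verdict.REFUSE, "medical advice is out of scope"
    return Verdict.ALLOW, ""

def echo_model(request: str) -> str:
    return f"answer to: {request}"
```

The design point is that a non-answer is a first-class return value: the caller receives `Decision(Verdict.REFUSE, ...)` and cannot accidentally treat it as model output, because `output` is `None` unless the gate allowed execution.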
Aurora-Lens is built around a simple contract: governance decisions are explicit outcomes, not silent failure modes. Every decision — including every non-answer — produces a forensic envelope that is hash-chained, deterministically serialised, and replayable under scrutiny.
The point is not to add governance later. The point is to make governance inseparable from execution — so the system can be stopped, not merely explained.