RELAY
We accidentally proved why this product needs to exist.
During development, we discovered structural integrity failures in ChatGPT and Claude that their own security teams missed. Those discoveries became the product.
What We Found
ChatGPT: Cross-Project Data Synthesis
ChatGPT twice synthesized data from separate projects without user consent. Information from one conversation context was used to inform responses in an unrelated project. This is a structural boundary failure. The model doesn't know it shouldn't do this because no governance layer tells it where the boundaries are.
Claude: Uncritical Compliance
Claude accepted formatted governance blocks without verification — executing instructions embedded in structured text as if they were system-level directives. No pushback. No validation. The model complied because the input looked authoritative. That's a structural trust failure.
What This Means
These aren't edge cases. They're structural gaps that exist in every AI system that lacks a governance layer between the human and the model. The model cannot govern itself. It needs an external enforcement layer that operates deterministically — same input, same rules, same result.
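A minimal sketch of what "same input, same rules, same result" could mean in practice: a fixed rule table and a pure scoring function, with no model in the loop. The rule names and penalty weights here are illustrative assumptions, not Relay's actual rules.

```python
# Illustrative deterministic gate: a fixed rule table and a pure
# scoring function. The same flags always produce the same verdict --
# no model, no randomness, no learned weights.

RULES = {
    "cross_project_reference": 40,   # hypothetical penalty weights
    "embedded_directive": 50,
    "unverified_authority_claim": 30,
}

BLOCK_THRESHOLD = 50

def score(flags):
    """Sum the penalties for the rule flags raised on an input."""
    return sum(RULES[f] for f in flags)

def gate(flags):
    """Deterministic verdict: block iff the score reaches the threshold."""
    return "block" if score(flags) >= BLOCK_THRESHOLD else "pass"
```

Because `gate` is a pure function over a fixed table, two runs on the same input can never disagree, which is exactly the property a probabilistic model cannot offer.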
Both findings were submitted through responsible disclosure. Both were acknowledged. Neither had been caught by internal security testing.
How Relay Works
Human input → V1 Score + Gate → Secure token → LLM → V3 Stabilize → Human output
Relay sits between the human and the AI. It scores inbound communication before the model sees it, wraps the intent in a secure token that prevents reinterpretation, and stabilizes the output before it reaches the human. The model never operates on raw, ungoverned input.
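The flow above can be sketched end to end. Everything except the model call is deterministic; the function names and the specific checks (`v1_score_and_gate`, a substring test in `v3_stabilize`) are our illustrative assumptions, not Relay's API.

```python
def v1_score_and_gate(text):
    # Hypothetical inbound check: refuse text carrying an embedded directive.
    if "ignore previous instructions" in text.lower():
        raise ValueError("gated: embedded directive detected")
    return text

def wrap_in_token(text, objective):
    # Bind the cleared text to its stated objective so later stages can
    # verify the intent was not reinterpreted along the way.
    return {"objective": objective, "payload": text}

def v3_stabilize(token, model_output):
    # Hypothetical outbound check: withhold output that strays from the
    # bound objective (a real check would be far richer than a substring test).
    if token["objective"] not in model_output:
        return "[withheld: output did not satisfy bound objective]"
    return model_output

def relay(text, objective, model):
    """Score and gate the input, bind it to a token, then stabilize the output."""
    token = wrap_in_token(v1_score_and_gate(text), objective)
    return v3_stabilize(token, model(token["payload"]))
```

The model is injected as a plain callable, which makes the point structurally: the governance stages run the same way no matter what the model does in the middle.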
The secure token is not a prompt wrapper. It's a structural binding that carries the objective, constraints, and closure conditions as verified metadata. The LLM can generate — but it can't override the governance layer.
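One way such a binding could be implemented is a keyed signature over the governance metadata, checked before any output is released. The field names and key handling below are illustrative assumptions, not Relay's token format.

```python
import hashlib
import hmac
import json

# Illustrative only: a real system would manage this key properly.
SECRET = b"relay-demo-key"

def issue_token(objective, constraints, closure):
    """Bind objective, constraints, and closure conditions into signed metadata."""
    meta = {"objective": objective, "constraints": constraints, "closure": closure}
    body = json.dumps(meta, sort_keys=True).encode()       # canonical serialization
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"meta": meta, "sig": sig}

def verify_token(token):
    """Any edit to the metadata breaks the signature, so a reinterpreted
    objective or relaxed constraint is detectable before output release."""
    body = json.dumps(token["meta"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])
```

The LLM never holds the signing key, so it can generate freely inside the token's payload but cannot produce a validly signed replacement for the governance metadata.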
Why It Matters
Every AI product today trusts the model to self-govern. The model hallucinates. The model over-commits. The model synthesizes across boundaries. The model complies with authority it shouldn't recognize. These are not bugs in the model — they're the absence of structural enforcement at the interface layer.
Relay is that layer. No LLM in the engine. No probabilistic output. Deterministic rules that don't change based on what the model thinks.