Runtime Governance and Execution Control Plane for AI Systems
RIO converts goals into structured intent, evaluates risk and policy, requires approval when necessary, controls execution, verifies outcomes, and generates cryptographic receipts stored in a tamper-evident ledger. Each decision feeds back into the system to improve future governance, creating a closed loop of authorized, verified, and auditable execution.
The Three-Loop Architecture at a glance:
- Intake Discovery Loop: translate vague goals into structured intents before governance. Also known as: Intake Translation Layer, Universal Grammar Layer, Goal-to-Intent Layer.
- Execution/Governance Loop: control and authorize all actions before execution.
- Learning Loop: improve future decisions and governance policies.
The AI agent or automated system submits a raw action request — which may be a vague goal or a structured command. The system assigns a unique request ID, records the timestamp, and verifies the requester's identity against the IAM registry. Unknown or inactive users are rejected immediately.
New in the Three-Loop Architecture. If the request is a vague goal rather than a structured action, the Intake Discovery Loop activates. The system detects missing information, uses AI-assisted refinement to clarify the intent, and iterates until a complete, machine-readable structured intent can be produced. This ensures governance always operates on well-defined intents, not ambiguous requests.
The system identifies the action type (e.g., transfer_funds, send_email, delete_data) and assigns an initial risk category — LOW, MEDIUM, HIGH, or CRITICAL — based on the action and the requester's role. Classification does not make policy decisions; it provides inputs for the engines that do.
The Policy Engine evaluates the intent against active rules, returning ALLOW, BLOCK, or REQUIRE_APPROVAL. The Risk Engine computes a numeric score from four components: base risk (action type), role modifier (requester's role), amount modifier (financial or data volume), and target modifier (sensitivity of the target system).
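As a sketch, the four-component score might be computed like this. The component tables, the cap on the amount modifier, and the unknown-action default are illustrative assumptions, not RIO's actual values:

```python
# Illustrative component tables; real values are policy-defined, not these.
BASE_RISK = {"send_email": 10, "transfer_funds": 60, "delete_data": 70}
ROLE_MODIFIER = {"admin": -10, "analyst": 0, "service_account": 15}

def risk_score(action: str, role: str, amount: float, target_sensitive: bool) -> int:
    base = BASE_RISK.get(action, 50)             # base risk from the action type
    role_mod = ROLE_MODIFIER.get(role, 10)       # modifier from the requester's role
    amount_mod = min(int(amount // 1_000), 25)   # financial or data-volume modifier, capped
    target_mod = 20 if target_sensitive else 0   # sensitivity of the target system
    return max(0, base + role_mod + amount_mod + target_mod)
```

With these assumed tables, a 5,000-unit transfer_funds request by an analyst against a sensitive target scores 60 + 0 + 5 + 20 = 85.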
If the policy decision is ALLOW and risk is below threshold, an authorization token is issued automatically. If REQUIRE_APPROVAL, the request is escalated to a human approver. The approver sees the full context: who requested it, what action, what risk score, why it was escalated. Upon approval, a time-bound, single-use, nonce-protected Execution Token is generated.
The hard enforcement boundary. The gate verifies the authorization token's signature, checks that the nonce has not been consumed, confirms the token has not expired, and checks the kill switch. Only if all conditions pass does the gate open and dispatch the action to the appropriate adapter (email, file, HTTP, etc.).
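A minimal sketch of those four gate checks, using an HMAC signature over the token claims purely for illustration (the key handling, field names, and signing scheme here are assumptions, not RIO's production design):

```python
import hashlib, hmac, json, os, time

SECRET = os.urandom(32)   # gate's verification key (illustrative, not RIO's real scheme)
consumed = set()          # nonces that have already been spent
KILL_SWITCH = False       # global emergency stop

def sign(claims: dict) -> str:
    body = json.dumps(claims, sort_keys=True).encode()
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def gate_check(token: dict) -> bool:
    claims = {k: v for k, v in token.items() if k != "signature"}
    if not hmac.compare_digest(sign(claims), token["signature"]):
        return False                    # signature does not verify
    if token["nonce"] in consumed:
        return False                    # nonce already consumed (replay attempt)
    if time.time() > token["expires_at"]:
        return False                    # token expired
    if KILL_SWITCH:
        return False                    # kill switch engaged: fail closed
    consumed.add(token["nonce"])        # mark the single-use token as spent
    return True                         # gate opens; dispatch to the adapter

# Issue a time-bound, single-use token for a governed intent.
token = {"intent_id": "req-001", "nonce": os.urandom(16).hex(),
         "expires_at": time.time() + 300}
token["signature"] = sign(token)
```

The first gate_check(token) succeeds and consumes the nonce; presenting the same token again is rejected, which is what makes the token single-use.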
New in v2. After execution completes, the verification stage computes three SHA-256 hashes: intent_hash (binding the intent ID, action, and requester to the request timestamp), action_hash (binding the action type and parameters), and verification_hash (binding intent_hash + action_hash + execution status). These three hashes cryptographically prove that the action executed matches the action that was authorized. The verification_status is set to ‘verified’ on success.
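The three-hash binding can be sketched as follows; the "|" field delimiter and argument order are assumptions, since the text specifies only which fields each hash binds:

```python
import hashlib, json

def sha256_hex(*fields: str) -> str:
    """Hash '|'-joined fields with SHA-256 (the delimiter is an assumed convention)."""
    return hashlib.sha256("|".join(fields).encode()).hexdigest()

def verification_hashes(intent_id, action, requester, request_ts, params, status):
    # intent_hash binds the intent ID, action, and requester to the request timestamp.
    intent_hash = sha256_hex(intent_id, action, requester, request_ts)
    # action_hash binds the action type and its parameters.
    action_hash = sha256_hex(action, json.dumps(params, sort_keys=True))
    # verification_hash binds intent_hash + action_hash + execution status.
    verification_hash = sha256_hex(intent_hash, action_hash, status)
    return intent_hash, action_hash, verification_hash
```

Altering any executed parameter changes action_hash, and therefore verification_hash, so a receipt can no longer match an action other than the one that was authorized.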
A v2 cryptographic receipt is generated for every outcome: approved, denied, or blocked. The receipt contains intent_hash, action_hash, verification_hash, verification_status, the risk score and risk level, the policy decision, three ISO 8601 timestamps (request, approval, execution), and an Ed25519 signature over the receipt hash. Blocked actions receive receipts as well, ensuring every decision is recorded.
The final stage appends a hash-linked entry to the v2 tamper-evident audit ledger. Each entry contains the receipt_hash (linking it to the receipt), the previous_hash (linking to the prior ledger entry), the current_hash (computed from all entry data), and its own ledger_signature. Any modification to any entry invalidates all subsequent hashes, making tampering immediately detectable.
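A minimal hash-chain sketch of that ledger; the per-entry ledger_signature is omitted for brevity, and the genesis value and field serialization are assumptions:

```python
import hashlib, json

GENESIS = "0" * 64  # assumed previous_hash for the first entry

def entry_hash(entry: dict) -> str:
    """current_hash computed over the entry's data (here: receipt_hash + previous_hash)."""
    body = json.dumps({"receipt_hash": entry["receipt_hash"],
                       "previous_hash": entry["previous_hash"]}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def append_entry(ledger: list, receipt_hash: str) -> None:
    previous = ledger[-1]["current_hash"] if ledger else GENESIS
    entry = {"receipt_hash": receipt_hash, "previous_hash": previous}
    entry["current_hash"] = entry_hash(entry)
    ledger.append(entry)

def verify_chain(ledger: list) -> bool:
    previous = GENESIS
    for entry in ledger:
        if entry["previous_hash"] != previous:
            return False                      # broken link to the prior entry
        if entry["current_hash"] != entry_hash(entry):
            return False                      # entry data was modified
        previous = entry["current_hash"]
    return True
```

Editing any entry's receipt_hash invalidates its stored current_hash; recomputing that hash instead breaks the next entry's previous_hash link, so tampering is detected either way.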
After every pipeline execution, the Learning Loop analyzes outcomes to improve future governance. It cannot bypass governance or execute actions, and its policy updates must pass through governance before deployment.
The Governed Corpus stores every pipeline decision as structured data. Patterns are extracted: which actions are most frequently denied, which policies trigger the most escalations, which risk scores cluster near thresholds.
Insights feed back into policy refinement and risk model tuning. Policy updates themselves must go through governance before deployment — the Learning Loop cannot bypass the Execution/Governance Loop.
The Replay Engine re-evaluates historical decisions under modified policies. The Simulation API enables what-if analysis without affecting live operations. Both tools validate proposed changes before they reach production.
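The replay idea can be sketched as re-running a candidate policy over stored decisions and collecting divergences; the record shape and the example policy below are hypothetical, not RIO's actual corpus schema:

```python
def replay(corpus, candidate_policy):
    """Re-evaluate historical intents under a candidate policy; report what would change."""
    diffs = []
    for record in corpus:
        new_decision = candidate_policy(record["intent"])
        if new_decision != record["decision"]:
            diffs.append({"intent": record["intent"],
                          "was": record["decision"],
                          "now": new_decision})
    return diffs

# Hypothetical governed-corpus records and a stricter candidate policy.
corpus = [
    {"intent": {"action": "send_email", "risk": 15}, "decision": "ALLOW"},
    {"intent": {"action": "transfer_funds", "risk": 85}, "decision": "REQUIRE_APPROVAL"},
]
def stricter(intent):
    return "BLOCK" if intent["risk"] >= 80 else "ALLOW"
```

Here replay(corpus, stricter) reports one divergence: the high-risk transfer that previously escalated to a human would now be blocked outright, which is exactly the what-if signal a proposed policy change needs before reaching production.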
- Approved: policy allows, authorization valid, execution succeeds, verification passes. A v2 receipt (with intent_hash, action_hash, verification_hash) and a signed ledger entry are produced.
- Denied: policy denies or a human approver denies. A v2 denial receipt and a signed ledger entry are produced. No execution occurs; the full audit trail is preserved.
- Blocked: kill switch engaged, verification fails, or system failure. A v2 blocked receipt and a signed ledger entry are produced. Fail-closed enforcement.
See the pipeline in action with real cryptographic enforcement.
Try Demo 4 — Full Pipeline