For CISO

Runtime enforcement. Versioned audit trails. Every decision.

You cannot approve AI autonomy without deterministic enforcement and immutable audit records. Corules ensures every AI decision is validated against versioned policy before execution — and logged in a tamper-proof ledger.

Three things that block CISO approval

No deterministic enforcement in the execution path

Risk

AI agents can propose — and execute — actions that violate policy if there is no runtime enforcement layer. Non-deterministic outputs from LLMs cannot be trusted to stay within compliance bounds without an independent validation step.

How Corules addresses it

Corules evaluates every AI-proposed action against compiled CEL policy before it executes. The enforcement is deterministic — same input, same policy, same outcome. No LLM in the enforcement path.
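The determinism claim can be illustrated with a minimal sketch. This is not Corules code (Corules compiles CEL; the rule below is a hypothetical stand-in): the point is that a compiled policy is a pure function of its inputs, so re-evaluation can never change the outcome.

```python
def evaluate(policy_version: str, action: dict) -> str:
    """Stand-in for a compiled policy: a pure function of its inputs.
    Same input + same policy version always yields the same outcome."""
    # Hypothetical rule: large wire transfers are escalated to a human.
    if action.get("type") == "wire_transfer" and action.get("amount", 0) > 10_000:
        return "ESCALATE"
    return "ALLOW"

action = {"type": "wire_transfer", "amount": 50_000, "currency": "EUR"}
# Re-evaluating never changes the result: no LLM, no randomness in the path.
assert evaluate("v42", action) == evaluate("v42", action) == "ESCALATE"
```

There is nothing probabilistic to drift: given the same normalized input and the same policy version, the outcome is fixed.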

No immutable audit trail for AI decisions

Risk

If you cannot show what decision was made, when, by which AI, under which policy, and why — you cannot defend it to an auditor, regulator, or board. Email threads and application logs do not constitute audit-grade evidence.

How Corules addresses it

Every Corules decision is written to an append-only ledger with policy_set_version, normalized input hash, actor identity, outcome, and violations. The record cannot be modified. Any past decision is replayable with bit-identical results.

No versioned policy control

Risk

When a policy changes, what happens to decisions already made under the old one? Without versioning, you cannot show that a decision made six months ago was compliant under the rules in force at the time.

How Corules addresses it

Every published policy set carries a version identifier. Every audit record references the version that was active. Historical decisions remain linked to the policy that governed them — regardless of subsequent changes.

Compliance framework alignment

Corules is designed to satisfy the operational requirements of the major enterprise AI compliance frameworks — not as an overlay, but by design.

SOC 2 Type II

CC6.1 (Logical access), CC7.2 (Monitoring), A1.2 (Availability)

Immutable audit log provides evidence for access control decisions. Policy versions provide evidence of change management. ESCALATE-on-failure satisfies availability control requirements.

ISO 27001

A.12.4 (Logging), A.14.2 (Secure development), A.18.1 (Compliance)

Versioned policy-as-code satisfies secure development requirements. Append-only ledger satisfies logging requirements. Policy review workflow satisfies compliance monitoring.

EU AI Act

Article 9 (Risk management), Article 12 (Record-keeping), Article 14 (Human oversight)

Corules provides the deterministic control layer required for high-risk AI systems. Audit logs satisfy Article 12 record-keeping. ESCALATE mechanism satisfies Article 14 human oversight requirements.

NIST AI RMF

GOVERN 1.1, MAP 1.5, MEASURE 2.5, MANAGE 1.3

Policy-as-code operationalizes the GOVERN function. Runtime enforcement operationalizes MANAGE. Immutable audit log supports MEASURE. Versioned policies support MAP requirements.

See also: EU AI Act compliance → · NIST AI RMF → · Compliance by design →

Safe defaults by architecture

Corules is designed to fail safe, not open. On any evaluation failure — timeout, service unavailability, malformed input — the response is always ESCALATE, never ALLOW.

No implicit trust

Actor identity is never accepted from chat text or LLM output. It must come from signed claims or identity resolution — verifiable, not asserted.

Fail-safe on ambiguity

When policy evaluation is uncertain, incomplete, or fails with an error, the outcome defaults to ESCALATE, routing the decision to human review rather than silently allowing it.

Least privilege for agents

AI agents can only take actions that are explicitly permitted by the compiled policy set for their context and actor role. Everything else is blocked.

Separation of duties

No single system — including AI — has authority to both propose and execute decisions. Corules enforces this architectural separation at runtime.

Questions from CISOs

How does Corules prevent unauthorized AI actions?

Every AI-proposed action must pass through /v1/validate before it executes. The evaluation checks the action against the compiled CEL policy set for that use case. Actions outside policy bounds return BLOCK. Ambiguous actions return ESCALATE. Nothing executes unless the policy explicitly ALLOWs it — fail-safe by default.
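In client terms, that dispatch might look like the following sketch. The endpoint name `/v1/validate` and the outcomes ALLOW/BLOCK/ESCALATE come from this page; everything else (function names, callback shape) is assumed for illustration.

```python
from typing import Callable

def handle_decision(outcome: str,
                    execute: Callable[[], None],
                    escalate: Callable[[], None]) -> str:
    """Dispatch on a Corules-style decision: only an explicit ALLOW executes."""
    if outcome == "ALLOW":
        execute()
        return "executed"
    if outcome == "BLOCK":
        return "blocked"
    # ESCALATE, or anything unexpected, routes to human review (fail-safe).
    escalate()
    return "escalated"
```

Note the default branch: an outcome the client does not recognize is treated like ESCALATE, so the "nothing executes without an explicit ALLOW" property holds on the caller side as well.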

How do we produce audit evidence for SOC 2 or internal audit?

Every Corules decision is written to an append-only audit ledger with: tenant_id, use_case_id, actor_id, policy_set_version, normalized input hash, outcome (ALLOW/BLOCK/ESCALATE), violations (if any), timestamp, and correlation_id. You query the ledger and export evidence for any time window. The log cannot be modified after the fact.
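The "normalized input hash" in each record can be produced deterministically, for example by hashing canonical JSON. This is a sketch of the general technique; the exact normalization Corules applies is not specified on this page.

```python
import hashlib
import json

def normalized_input_hash(payload: dict) -> str:
    """Hash a canonical form of the input: sorted keys, no whitespace."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Key order does not affect the hash; only the content does.
a = normalized_input_hash({"actor_id": "agent-7", "amount": 100})
b = normalized_input_hash({"amount": 100, "actor_id": "agent-7"})
assert a == b
```

Because logically identical inputs hash identically, auditors can match a ledger entry to the exact request it governed without storing the raw payload in the evidence export.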

What happens if Corules itself has an outage?

Corules is configured with a safe-default behavior: on any evaluation failure (timeout, unavailability, unexpected error), the response is ESCALATE — never silently ALLOW. AI agents that call Corules must handle ESCALATE by routing to human review. The enforcement path fails safe, not open.
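A caller-side sketch of that fail-safe contract (the client function is hypothetical; the only behavior taken from this page is that any failure maps to ESCALATE, never ALLOW):

```python
def validate_or_escalate(call_validate, request: dict) -> str:
    """Wrap the /v1/validate call so that any failure degrades to ESCALATE."""
    try:
        outcome = call_validate(request)
    except Exception:
        # Timeout, outage, malformed response: fail safe, not open.
        return "ESCALATE"
    # Defensive: an unrecognized outcome is also treated as ESCALATE.
    return outcome if outcome in ("ALLOW", "BLOCK", "ESCALATE") else "ESCALATE"
```

The important property is the absence of any code path that converts a failure into ALLOW: an outage can stall autonomy, but it cannot widen it.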

How do policy changes affect historical audit records?

Policy sets are versioned. Every audit record carries the policy_set_version that was active at the time of the decision. When policy changes, new decisions evaluate against the new version. Historical records remain immutably linked to the version that was active — so any past decision can be replayed with bit-identical results, regardless of subsequent policy changes.
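Replay follows directly from versioning: look up the policy by the recorded policy_set_version and re-evaluate the recorded input. A sketch with hypothetical structures (the version index and the toy amount limits are illustrative, not Corules internals):

```python
# Hypothetical: policy sets indexed by version, each a pure function.
POLICIES = {
    "v1": lambda a: "ALLOW" if a["amount"] <= 1000 else "BLOCK",
    "v2": lambda a: "ALLOW" if a["amount"] <= 500 else "BLOCK",  # later, stricter
}

def replay(record: dict) -> str:
    """Re-run a past decision under the policy version recorded with it."""
    policy = POLICIES[record["policy_set_version"]]
    return policy(record["input"])

record = {"policy_set_version": "v1", "input": {"amount": 800}, "outcome": "ALLOW"}
# Even after v2 tightened the limit, the v1 record replays identically.
assert replay(record) == record["outcome"] == "ALLOW"
```

The same input evaluated under v2 would return BLOCK, which is exactly why the record must pin the version: the question an auditor asks is not "is this allowed today?" but "was it allowed then?"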

Make AI decisions defensible to your auditors.

Runtime enforcement, versioned policy, and immutable audit logs — the three things required to approve AI autonomy.

Request access

For enterprise security and risk teams.