AI Audit Trail for Enterprise Decisions

For compliance and audit teams that need traceable, reproducible AI decision logs that survive regulatory examination.

The Problem

Every decision evaluated by Corules produces an immutable audit entry containing: policy set version, normalized input hash, actor identity (from signed claims, not self-report), outcome (ALLOW/BLOCK/ESCALATE), and the specific rule path that determined the outcome. Decisions are replayable: given the same policy version and normalized input, evaluation produces identical output. This means an auditor can take any historical decision, reproduce it exactly, and verify that the policy in effect at the time produced the stated outcome.
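The replay property above can be sketched in a few lines of Python. This is a minimal illustration, not the real runtime: `evaluate` and the `limits` table are hypothetical stand-ins for a pure policy function, chosen only to show that a deterministic evaluator lets an auditor re-run a historical decision and compare outcomes.

```python
def evaluate(policy_set_version: str, normalized_input: dict) -> str:
    # Hypothetical stand-in for the policy runtime: a pure function with
    # no hidden state, so the same (policy version, normalized input)
    # always yields the same outcome -- the property that makes
    # decisions replayable.
    limits = {"pset_v3.2.1": 10}  # illustrative per-version limit
    max_discount = limits.get(policy_set_version, 0)
    if normalized_input.get("discount_pct", 0) > max_discount:
        return "BLOCK"
    return "ALLOW"

def verify(audit_record: dict, normalized_input: dict) -> bool:
    # Auditor check: re-run the historical decision and confirm the
    # policy in effect at the time produced the stated outcome.
    replayed = evaluate(audit_record["policy_set_version"], normalized_input)
    return replayed == audit_record["outcome"]
```

Because `evaluate` depends only on its arguments, running it twice on the same inputs is guaranteed to agree, which is exactly what an auditor relies on.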

How Corules Solves It

Corules's policy runtime evaluates structured context against compiled CEL expressions — returning ALLOW, BLOCK, or ESCALATE with a reason and audit ID.
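A schematic Python stand-in for this evaluation loop is below. The real runtime compiles CEL expressions; here each rule is simply a (source text, predicate) pair, and the rule text, context fields, and `aud_` prefix are illustrative assumptions matching the example record later in this page.

```python
import uuid

# Schematic stand-in for compiled CEL rules: each entry pairs the rule
# source (reported as the violation reason) with a predicate over the
# structured context. The real runtime compiles CEL; this is Python.
RULES = [
    ("discount_pct > max_discount_by_tier[tier]",
     lambda ctx: ctx["discount_pct"] > ctx["max_discount_by_tier"][ctx["tier"]]),
]

def evaluate(context: dict) -> dict:
    # Evaluate rules in order; the first violated rule determines the
    # outcome and becomes the recorded reason.
    for source, violated in RULES:
        if violated(context):
            return {"outcome": "BLOCK", "reason": source,
                    "audit_id": "aud_" + uuid.uuid4().hex}
    return {"outcome": "ALLOW", "reason": None,
            "audit_id": "aud_" + uuid.uuid4().hex}
```

Returning the rule source as the reason is what makes the audit entry self-explanatory: the record names the exact rule path that determined the outcome.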

Audit Record Example

// Every evaluation produces an audit record:
{
  "audit_id": "aud_01J...",
  "policy_set_version": "pset_v3.2.1",
  "input_hash": "sha256:4a7b...",
  "actor_id": "user_01J...",
  "outcome": "BLOCK",
  "violation": "discount_pct > max_discount_by_tier['standard']",
  "evaluated_at": "2026-02-23T08:14:22Z"
}

Frequently Asked Questions

How long are audit records retained?

Retention is configurable per tenant. The audit ledger is append-only — records cannot be modified or deleted. Retention periods are enforced by the platform.
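The append-only property can be illustrated with a minimal sketch. This is not the platform's implementation, just the shape of the guarantee: the ledger exposes an append path and a read path, and deliberately no update or delete, with defensive copies so callers cannot mutate stored records.

```python
class AppendOnlyLedger:
    """Minimal sketch of an append-only store: no update or delete path."""

    def __init__(self):
        self._records = []

    def append(self, record: dict) -> int:
        # Copy on write so the caller's later mutations can't reach
        # the stored record; return the record's index.
        self._records.append(dict(record))
        return len(self._records) - 1

    def get(self, index: int) -> dict:
        # Copy on read: callers receive a snapshot, never a reference
        # into the ledger's internal state.
        return dict(self._records[index])
```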

Can I export audit records for external auditors?

Yes. The audit API supports querying by time range, actor, policy version, and outcome. Exports are available in JSON and CSV format.
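As a local illustration of that query-then-export flow, the sketch below filters records by time range and outcome and emits CSV. The function name, field set, and filtering parameters are assumptions for illustration, not the actual audit API.

```python
import csv
import io
from datetime import datetime

def export_csv(records, start, end, outcome=None) -> str:
    # Hypothetical local sketch of an audit export: keep records whose
    # evaluation time falls in [start, end] and (optionally) whose
    # outcome matches, then serialize the selection as CSV.
    def evaluated_at(record):
        # fromisoformat() in older Pythons doesn't accept a trailing Z.
        return datetime.fromisoformat(record["evaluated_at"].replace("Z", "+00:00"))

    selected = [r for r in records
                if start <= evaluated_at(r) <= end
                and (outcome is None or r["outcome"] == outcome)]

    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["audit_id", "actor_id", "outcome", "evaluated_at"])
    writer.writeheader()
    for r in selected:
        writer.writerow({k: r[k] for k in writer.fieldnames})
    return buf.getvalue()
```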

What is input hash normalization?

Before hashing, the input payload is canonicalized (sorted keys, normalized whitespace). This ensures that two semantically identical inputs produce the same hash regardless of formatting differences.
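A minimal sketch of that canonicalize-then-hash step, using Python's standard library (sorted keys, compact separators, UTF-8 encoding); the exact canonicalization rules of the platform may differ:

```python
import hashlib
import json

def input_hash(payload: dict) -> str:
    # Canonical form: keys sorted, no insignificant whitespace, so two
    # semantically identical payloads serialize -- and therefore hash --
    # identically regardless of how the caller formatted them.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

For example, `{"b": 2, "a": 1}` and `{"a": 1, "b": 2}` produce the same hash, while changing any value produces a different one.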

Stop limiting AI to suggestions.

Start for free