For CTOs

A deterministic execution gate for probabilistic AI.

AI is probabilistic by design. Enterprise decisions are deterministic by requirement. Corules compiles your policy rules into a CEL runtime that evaluates every AI-proposed action before it executes.

The probabilistic / deterministic gap

LLMs are probabilistic — the same input can produce different outputs. Business rules are deterministic — the same input must produce the same outcome. These two requirements are architecturally incompatible without an enforcement layer between them.

The pattern that resolves this: AI proposes, policy enforces, system defends. The AI generates recommendations. A deterministic runtime evaluates whether each recommendation falls within compiled policy constraints before any action executes. No LLM in the enforcement path.

AI proposes

LLM generates a structured recommendation based on context.

Policy enforces

Corules evaluates the proposal against compiled CEL constraints in microseconds.

System defends

Outcome (ALLOW / BLOCK / ESCALATE) is logged immutably with full audit record.
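As a sketch, the proposes / enforces / defends loop might look like this from the calling agent's side. The stub evaluator and function names are illustrative, not the Corules API; the point is that the enforcement step is an ordinary deterministic function with no LLM in the path.

```python
from dataclasses import dataclass

# Hypothetical decision record mirroring the outcomes described above.
@dataclass
class Decision:
    outcome: str   # "ALLOW" | "BLOCK" | "ESCALATE"
    reason: str
    audit_id: str

def enforce(proposal: dict, max_discount: float) -> Decision:
    """Stand-in for the deterministic policy gate: same input, same outcome."""
    if proposal["discount_pct"] <= max_discount:
        return Decision("ALLOW", "within policy bounds", "audit-001")
    return Decision("BLOCK", "discount exceeds policy ceiling", "audit-002")

# 1. AI proposes (stubbed here): a structured recommendation.
proposal = {"discount_pct": 0.25}

# 2. Policy enforces: deterministic evaluation of the proposal.
decision = enforce(proposal, max_discount=0.20)

# 3. System defends: act only on ALLOW; log the decision either way.
print(decision.outcome)  # BLOCK
```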

Policy-as-code architecture

Corules uses a two-gate evaluation model. Both gates are evaluated against compiled CEL policy sets — no LLM, no variability, no ambiguity.

Gate 1: POST /v1/constraints

What can the AI propose?

Given context (customer tier, deal value, actor role) and a compiled policy set, returns the constraint bounds the AI must stay within when generating a recommendation. Called before the AI generates output.

Gate 2: POST /v1/validate

Is this specific proposal within bounds?

Given context + actor + the AI's specific proposed action, returns ALLOW, BLOCK, or ESCALATE — with violations, reason, and an audit_id. Called after the AI generates output, before execution.
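A hedged sketch of the two calls' shapes. The fields named above (context, actor, violations, reason, audit_id, the three outcomes) come from this page; everything else — specific keys, values, and the response layout — is illustrative, not the documented schema.

```python
# Gate 1: POST /v1/constraints — request carries context + actor,
# response carries the bounds the AI must stay within.
constraints_request = {
    "context": {"customer_tier": "gold", "deal_value": 120_000},
    "actor": {"role": "sales_rep"},
}
constraints_response = {       # illustrative shape
    "max_discount": 0.20,
}

# Gate 2: POST /v1/validate — request adds the AI's specific proposed action.
validate_request = {
    "context": {"customer_tier": "gold", "deal_value": 120_000,
                "proposed_discount": 0.15},
    "actor": {"role": "sales_rep"},
}
validate_response = {          # illustrative shape
    "outcome": "ALLOW",        # ALLOW | BLOCK | ESCALATE
    "violations": [],
    "reason": "within constraint bounds",
    "audit_id": "aud_7f3c",
}
```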

Policy example: discount approval

Policies are written in CEL and compiled at publish time. Evaluation at request time is purely deterministic — no LLM, no variability.

// Gate 1: Constraints — what is the AI allowed to propose?
// Returns allowed action bounds BEFORE the AI generates a recommendation.
discount_pct <= params.max_discount_by_tier[context.customer_tier]
  && (context.deal_value * (1 - discount_pct)) >= params.margin_floor

// Gate 2: Validate — is the AI's specific proposed action within bounds?
// Returns ALLOW, BLOCK, or ESCALATE with a reason and audit_id.
context.proposed_discount <= constraints.max_discount
  && context.proposed_discount <= params.override_ceiling
  && !(context.customer_tier == "standard" && context.proposed_discount > 0.15)

Expressions reference structured context (from the agent) and params (tenant-configurable values stored separately from policy logic).
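The Gate 2 expression above translates line-for-line into an ordinary pure function — a sketch to show what "purely deterministic" means here, not the engine itself. The sample values for params and constraints are made up.

```python
def gate2_validate(context: dict, constraints: dict, params: dict) -> bool:
    """Pure translation of the Gate 2 CEL expression: no I/O, no side effects,
    same output for the same input every time."""
    return (
        context["proposed_discount"] <= constraints["max_discount"]
        and context["proposed_discount"] <= params["override_ceiling"]
        and not (context["customer_tier"] == "standard"
                 and context["proposed_discount"] > 0.15)
    )

params = {"override_ceiling": 0.30}        # tenant-configurable values
constraints = {"max_discount": 0.20}       # bounds returned by Gate 1

# Standard-tier customer over the 15% carve-out: blocked even though the
# proposal is under both ceilings.
print(gate2_validate(
    {"customer_tier": "standard", "proposed_discount": 0.18},
    constraints, params))  # False
```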


Integration patterns

REST API

Any agent or workflow calls /v1/constraints and /v1/validate over HTTPS. OpenAPI 3.1 spec provided. Responses include structured decision objects with correlation_id for distributed tracing.
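A minimal client sketch, using only the Python standard library. The paths and the correlation_id field come from this page; the host name and the bearer-token auth scheme are assumptions.

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder host, not the real endpoint

def build_validate_request(context: dict, actor: dict,
                           api_key: str) -> urllib.request.Request:
    """Assemble the POST /v1/validate request; the network call is separate."""
    payload = json.dumps({"context": context, "actor": actor}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/v1/validate",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},  # auth scheme assumed
        method="POST",
    )

def validate(context: dict, actor: dict, api_key: str) -> dict:
    """Send the request and return the structured decision object
    (outcome, violations, reason, audit_id, correlation_id)."""
    req = build_validate_request(context, actor, api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```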

MCP server

Corules exposes an MCP (Model Context Protocol) server — compatible with Claude, GPT-4, and any MCP-aware agent framework. Agents call get_constraints and validate_decision as tool calls.

Salesforce native

Apex and Flow templates call Corules via Named Credentials. Evaluations are synchronous and return within SLA for inline approval flows.

Power Platform connector

Custom connector wraps /v1/validate for use in Power Automate. No-code enforcement gate for Copilot Studio and Power Apps flows.

Technical questions from CTOs

Why CEL instead of a rules DSL or custom expression language?

CEL (Common Expression Language) is a Google-developed expression language that is deterministic, sandboxed, type-checked, and compilable. It was designed specifically for policy evaluation — not general computation. CEL expressions cannot perform I/O, have no side effects, and produce the same output for the same input every time. It is used in Kubernetes admission controllers and Google IAM for the same reasons.

How does the two-gate architecture work?

Gate 1 (Constraints, /v1/constraints) returns what the AI is allowed to propose, given the current context and actor. The AI generates a recommendation within those bounds. Gate 2 (Validate, /v1/validate) confirms the specific proposed action falls within the constraint set and returns ALLOW, BLOCK, or ESCALATE. Both gates evaluate in microseconds from compiled policy sets.

How are policies versioned and deployed without downtime?

Policy sets are published atomically. Corules compiles the new CEL expressions, stores the compiled set, and activates the new version. In-flight requests complete against the previous version. New requests evaluate against the new version. Every decision is linked to the policy set version that was active — so historical decisions are replayable with bit-identical results.
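The version-pinning behavior described above can be sketched as a registry of compiled policy sets, with each decision recording the version it was evaluated against. The registry, version names, and thresholds below are illustrative; in Corules the entries would be compiled CEL, not Python lambdas.

```python
# Each published policy-set version maps to its compiled evaluator.
policy_versions = {
    "v1": lambda ctx: ctx["proposed_discount"] <= 0.20,
    "v2": lambda ctx: ctx["proposed_discount"] <= 0.15,  # tightened ceiling
}

def evaluate(ctx: dict, version: str) -> dict:
    """Evaluate against a pinned version and record it in the decision."""
    allowed = policy_versions[version](ctx)
    return {"outcome": "ALLOW" if allowed else "BLOCK",
            "policy_version": version}

# A historical decision replays bit-identically against its pinned version,
# even after v2 has been activated for new requests.
historical = evaluate({"proposed_discount": 0.18}, "v1")
print(historical)  # {'outcome': 'ALLOW', 'policy_version': 'v1'}
current = evaluate({"proposed_discount": 0.18}, "v2")
print(current)     # {'outcome': 'BLOCK', 'policy_version': 'v2'}
```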

Is there vendor lock-in if we adopt Corules?

No. Policies are written in CEL — an open standard. The REST API (/v1/constraints, /v1/validate) follows a documented OpenAPI 3.1 spec. You can switch the underlying enforcement engine or implement a compatible endpoint yourself. Corules does not embed logic into your AI agents — it sits alongside them as a callable service.

See the architecture in your environment.

CEL policy-as-code. Two-gate evaluation. OpenAPI 3.1. Compile once, enforce everywhere.

Request access

For engineering and architecture teams.