Explainable AI Decision Reasoning for Enterprise

Teams needing AI decisions to be understandable to humans — for ECOA, FCRA, GDPR, and internal accountability requirements.

The problem

Corules decisions are inherently explainable because they are produced by deterministic CEL expressions, not model weights. When a decision is BLOCKED, the response includes the specific rule that failed and the exact values that caused the failure. A credit denial says: "DTI ratio 52% exceeds maximum of 43%." Not: "model score below threshold." This satisfies ECOA adverse action requirements, GDPR right-to-explanation requirements, and internal accountability needs simultaneously.

How Corules solves it

Corules's policy runtime evaluates structured context against compiled CEL expressions — returning ALLOW, BLOCK, or ESCALATE with a reason and audit ID.

Policy example

// Every BLOCK response includes human-readable violation:
{
  "outcome": "BLOCK",
  "violations": [
    {
      "rule": "dti_ratio <= params.max_dti",
      "actual": 0.52,
      "limit": 0.43,
      "explanation": "Debt-to-income ratio 52% exceeds maximum of 43%."
    }
  ],
  "adverse_action_reasons": ["Debt-to-income ratio too high"]
}
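The response above can be sketched as a deterministic check. This is a minimal illustration, not Corules's actual runtime: the function name and dictionary shapes are hypothetical, and a real deployment would evaluate a compiled CEL expression rather than an inline Python comparison.

```python
# Hypothetical sketch of a deterministic rule check producing a
# structured, human-readable violation. Not the Corules API.

def evaluate_dti_rule(context: dict, params: dict) -> dict:
    """Evaluate the rule `dti_ratio <= params.max_dti` against a context."""
    actual = context["dti_ratio"]
    limit = params["max_dti"]
    if actual <= limit:
        return {"outcome": "ALLOW", "violations": []}
    return {
        "outcome": "BLOCK",
        "violations": [{
            "rule": "dti_ratio <= params.max_dti",
            "actual": actual,
            "limit": limit,
            # The explanation is derived from the same values the rule
            # evaluated, so it is always consistent with the decision.
            "explanation": (
                f"Debt-to-income ratio {actual:.0%} exceeds "
                f"maximum of {limit:.0%}."
            ),
        }],
        "adverse_action_reasons": ["Debt-to-income ratio too high"],
    }

result = evaluate_dti_rule({"dti_ratio": 0.52}, {"max_dti": 0.43})
print(result["outcome"])  # BLOCK
print(result["violations"][0]["explanation"])
```

Because the explanation is computed from the same inputs the rule evaluated, it can never drift from the decision itself.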

Frequently Asked Questions

Can the explanation language be customized for different audiences?

Yes. Internal explanations can include rule references and parameter values. Customer-facing explanations can be written in plain language, configured per use case.
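One way to think about per-audience explanations is as templates rendered from the same underlying violation record. The template names and fields below are illustrative assumptions, not a Corules feature specification:

```python
# Hypothetical sketch: one violation record, two audience-specific
# renderings. Template keys and fields are illustrative only.

TEMPLATES = {
    # Internal view exposes the rule reference and parameter values.
    "internal": "Rule `{rule}` failed: actual={actual}, limit={limit}.",
    # Customer view is plain language, configured per use case.
    "customer": "Your application was declined because your "
                "debt-to-income ratio is too high.",
}

def render_explanation(violation: dict, audience: str) -> str:
    return TEMPLATES[audience].format(**violation)

violation = {
    "rule": "dti_ratio <= params.max_dti",
    "actual": 0.52,
    "limit": 0.43,
}
print(render_explanation(violation, "internal"))
print(render_explanation(violation, "customer"))
```

Keeping both renderings tied to a single violation record means the customer-facing wording can change without touching the audit trail.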

Does this satisfy GDPR Article 22 right to explanation?

Corules provides the technical basis for explanation — specific factors, their values, and the policy rule. Legal review of how this maps to GDPR obligations in your jurisdiction is separate.

Stop limiting AI to suggestions.

Start for free