Glossary
Enterprise AI governance vocabulary
25 terms across 6 categories — the language executives need to discuss AI governance, runtime enforcement, and audit defensibility internally and with regulators.
Policy Concepts
Policy-as-Code
Expressing organizational policies as machine-readable, version-controlled code that can be automatically enforced.
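The idea can be sketched in a few lines of Python. The rule names, limits, and version string below are illustrative assumptions, not Corules' actual format:

```python
# Illustrative policy-as-code sketch: rules live in a version-controlled
# data structure and are enforced by a generic evaluator, not buried in
# application logic. Field names and limits are hypothetical.

POLICY_VERSION = "2024-06-01"  # hypothetical version identifier

# Each rule maps a field of a proposed action to a machine-checkable condition.
RULES = [
    ("amount", lambda v: v <= 10_000),            # refunds capped at 10k
    ("currency", lambda v: v in {"USD", "EUR"}),  # approved currencies only
]

def evaluate(action: dict) -> bool:
    """Return True only if every rule passes for this action."""
    return all(check(action.get(field)) for field, check in RULES)

print(evaluate({"amount": 500, "currency": "USD"}))     # True
print(evaluate({"amount": 50_000, "currency": "USD"}))  # False
```

Because the rules are data plus pure checks, they can be diffed, reviewed, and rolled back like any other code.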
Runtime Enforcement
Evaluating compliance at the exact moment of execution — before an action completes — rather than auditing after the fact.
Deterministic Validation
A validation process that produces the same outcome for the same input, every time, without exception.
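A minimal sketch of what makes a check deterministic: the outcome depends only on the inputs, with no clock reads, randomness, or model calls inside the check. The function name and limit are illustrative:

```python
def validate_transfer(amount: float, limit: float = 10_000.0) -> str:
    """Deterministic: the outcome is a pure function of the inputs.
    No randomness, no wall-clock time, no model inference inside."""
    return "allow" if amount <= limit else "block"

# Same input, same outcome, every time -- the set collapses to one value.
results = {validate_transfer(9_500.0) for _ in range(1_000)}
print(results)  # {'allow'}
```

Contrast this with asking a model to re-judge the same transfer: the answer may vary run to run, which is exactly what deterministic validation rules out.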
Execution Gating
A mandatory control point that every AI-proposed action must pass before the downstream system executes it.
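In code, a gate is simply a mandatory check wrapped around the downstream call, so there is no path to execution that bypasses it. This is a sketch with hypothetical action types and limits:

```python
# Illustrative execution gate: every proposed action passes through the
# gate before the downstream system runs it. Names are hypothetical.

def policy_allows(action: dict) -> bool:
    return (action.get("type") in {"refund", "credit"}
            and action.get("amount", 0) <= 1_000)

def execute(action: dict) -> str:
    # Stand-in for the real downstream system call.
    return f"executed {action['type']}"

def gated_execute(action: dict) -> str:
    if not policy_allows(action):   # the mandatory control point
        return "blocked"
    return execute(action)

print(gated_execute({"type": "refund", "amount": 200}))  # executed refund
print(gated_execute({"type": "wire", "amount": 200}))    # blocked
```

The key architectural point is that callers invoke `gated_execute`, never `execute` directly.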
Business Rule Engine
A system that evaluates structured business logic independently of application code, enabling rules to be changed without code deployments.
Governance & Compliance
AI Governance
The framework of policies, controls, accountability structures, and oversight mechanisms that govern how AI systems are developed, deployed, and operated.
Model Risk Management (MRM)
The systematic process of identifying, measuring, and controlling risks arising from the use of AI and machine learning models in business decisions.
NIST AI Risk Management Framework (AI RMF)
The US National Institute of Standards and Technology's voluntary framework for managing risks associated with AI systems, built around four core functions: Govern, Map, Measure, and Manage.
EU AI Act Compliance
Meeting the requirements of the EU Artificial Intelligence Act for high-risk AI systems, including mandatory risk management, data governance, transparency, and audit logging obligations.
Responsible AI Controls
The operational mechanisms that make AI behavior accountable, fair, transparent, and auditable in practice — not just in policy.
Security Architecture
Zero Trust for AI
A security posture that extends zero-trust principles to AI: no AI output is trusted implicitly, and every AI-proposed action is verified against policy before execution, regardless of the source system or model.
Least Privilege for AI Agents
The principle that AI agents may take only the actions explicitly permitted by policy, with no default authority to act beyond their defined scope.
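The enforcement pattern is a per-agent allow-list with deny-by-default for anything unlisted. The agent names and action names here are illustrative:

```python
# Hypothetical per-agent allow-lists: anything not listed is denied.
AGENT_SCOPES = {
    "support-bot": {"read_ticket", "issue_refund"},
    "billing-bot": {"read_invoice"},
}

def is_permitted(agent: str, action: str) -> bool:
    # Unknown agents get an empty scope: there is no default authority.
    return action in AGENT_SCOPES.get(agent, set())

print(is_permitted("support-bot", "issue_refund"))  # True
print(is_permitted("billing-bot", "issue_refund"))  # False
```

The deny-by-default lookup (`.get(agent, set())`) is what makes the privilege "least": scope must be granted explicitly, never assumed.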
Action Authorization
The process of confirming that an AI-proposed action is within the permitted scope of the actor, under current policy, before execution.
Separation of Duties (for AI)
The principle that no single system, including an AI, should have the authority to both propose a decision and execute it without independent validation.
Fail-Safe Defaults
The principle that when policy evaluation is uncertain, or the enforcement system encounters an error, the default is to block or escalate, never to silently allow.
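The pattern is easy to show in code: any failure inside policy evaluation routes to escalation rather than falling through to an allow. The field names and limit are assumptions:

```python
def evaluate_policy(action: dict) -> bool:
    # Raises KeyError if a required field is missing -- an "uncertain"
    # evaluation that the caller must treat as unsafe.
    return action["amount"] <= 1_000

def decide(action: dict) -> str:
    """Fail-safe: an evaluation error escalates; it never silently allows."""
    try:
        return "allow" if evaluate_policy(action) else "block"
    except Exception:
        return "escalate"  # default to human review on any uncertainty

print(decide({"amount": 500}))  # allow
print(decide({}))               # escalate (missing field is not an allow)
```

The anti-pattern this guards against is `except Exception: return "allow"`, which turns every bug in the evaluator into an open gate.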
Audit & Traceability
Decision Traceability
The ability to reconstruct exactly why a specific decision was made — with the complete context, policy version, actor identity, and evaluation path — at any point in the future.
Immutable Audit Log
An append-only record of every AI decision that cannot be modified, deleted, or altered after the fact — providing tamper-proof evidence for audit and regulatory purposes.
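One common way to make an append-only log tamper-evident is hash chaining: each entry commits to the hash of the previous one, so any after-the-fact edit breaks the chain. This is a sketch of the technique, not Corules' log format:

```python
import hashlib
import json

# Append-only log where each entry commits to the previous entry's hash.
log: list[dict] = []

def append_entry(decision: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64  # genesis hash
    payload = json.dumps({"decision": decision, "prev": prev}, sort_keys=True)
    log.append({"decision": decision, "prev": prev,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain() -> bool:
    """Recompute every hash; any edited entry breaks verification."""
    prev = "0" * 64
    for e in log:
        payload = json.dumps({"decision": e["decision"], "prev": prev},
                             sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

append_entry({"action": "refund", "outcome": "allow"})
append_entry({"action": "wire", "outcome": "block"})
print(verify_chain())                      # True
log[0]["decision"]["outcome"] = "block"    # tampering with history...
print(verify_chain())                      # False: the chain detects it
```

In production this is typically backed by write-once storage as well; the hash chain makes tampering detectable, while the storage layer makes it difficult.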
Versioned Policies
Policy rules that carry explicit version identifiers, enabling any historical decision to be replayed against the exact policy logic that was active when the decision was made.
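The replay property follows from recording the policy version alongside each decision and keeping old versions addressable. A minimal sketch, with hypothetical versions and limits:

```python
# Hypothetical versioned policy store: a decision records which version
# evaluated it, so it can be replayed against that exact logic later.
POLICIES = {
    "v1": lambda amount: amount <= 500,
    "v2": lambda amount: amount <= 1_000,  # limit raised in a later version
}

def decide(amount: float, version: str) -> dict:
    return {"amount": amount,
            "policy_version": version,
            "allowed": POLICIES[version](amount)}

past = decide(800, "v2")
# Replay months later: same version, same logic, same outcome.
replayed = decide(past["amount"], past["policy_version"])
print(replayed["allowed"] == past["allowed"])  # True
print(decide(800, "v1")["allowed"])            # False under the old policy
```

Note that replaying against the current policy instead of the recorded version would silently rewrite history, which is exactly what versioning prevents.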
Explainability (AI Decision Explainability)
The capacity to provide a human-understandable, factual reason for why a specific AI decision produced a specific outcome.
Compliance by Design
Building compliance controls into the system architecture from the start — so that non-compliant actions are prevented by design, not detected after the fact.
Operational Framing
Human-in-the-Loop (HITL)
An oversight model where every AI decision or recommendation requires human review and approval before any action is taken.
Human-on-the-Loop (HOTL)
An oversight model where AI executes decisions within defined policy bounds autonomously, with humans monitoring and able to intervene — but not approving every decision.
Threshold-Based Escalation
A policy pattern where decisions that exceed defined numeric limits are automatically routed to human review, while decisions within limits are handled autonomously.
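The pattern reduces to a comparison against a configured limit, with everything above the limit routed to a human queue. The 5,000 threshold below is an illustrative assumption:

```python
# Illustrative threshold routing: within the limit is autonomous,
# above it goes to human review. The limit itself is hypothetical.
REVIEW_THRESHOLD = 5_000

def route(amount: float) -> str:
    return "human_review" if amount > REVIEW_THRESHOLD else "auto_approve"

print(route(1_200))   # auto_approve
print(route(25_000))  # human_review
```

Because the threshold is ordinary policy data, raising or lowering it is a policy change rather than a code deployment.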
AI Control
AI Control Plane
A centralized enforcement and governance layer that defines, enforces, and audits what AI agents are permitted to do across all workflows and systems.
Autonomous AI Agent Risk
The operational, regulatory, and reputational risk introduced when AI agents take business actions without deterministic policy controls and audit-grade traceability.
See these concepts in action
Corules implements every concept in this glossary — policy-as-code, runtime enforcement, versioned audit logs, and deterministic validation — in a single runtime you deploy once.
Request access