EU AI Act High-Risk AI System Compliance Requirements

For legal, compliance, and technology teams identifying which operational controls Articles 9–15 of the EU AI Act require for high-risk AI systems.

The answer

The EU AI Act classifies AI systems used in employment, credit, insurance, education, and essential services as high-risk. High-risk systems must satisfy the requirements of Articles 9–15: risk management (Article 9), data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency (Article 13), human oversight (Article 14), and accuracy, robustness, and cybersecurity (Article 15). The enforcement gap for most enterprises lies in Articles 9, 12, and 14: runtime risk controls, decision records, and human oversight mechanisms. Corules addresses all three operationally: CEL policy enforcement implements Article 9 risk controls; the immutable audit ledger satisfies Article 12 record-keeping; the ESCALATE mechanism implements Article 14 human oversight for decisions that exceed autonomous authority.

How it works

Corules's policy runtime sits in the enforcement path between your AI agent and the action it wants to take. The agent sends a structured context payload to /v1/validate. Corules evaluates the context against a compiled CEL policy set and returns a structured decision — ALLOW, BLOCK, or ESCALATE — with a reason and audit ID.
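
Here is a minimal sketch of that call in Python, assuming a bearer-token API. The /v1/validate path and the ALLOW, BLOCK, and ESCALATE decision values come from the description above; the base URL, header, payload fields, and response keys are illustrative assumptions rather than documented API.

import requests

# Base URL, API key, context fields, and response keys are illustrative
# assumptions; only /v1/validate and the decision values come from the text.
VALIDATE_URL = "https://corules.example.com/v1/validate"
API_KEY = "YOUR_API_KEY"

# Structured context payload the agent sends before acting.
context = {
    "action": "approve_credit_line",
    "credit_score": 712,
    "dti_ratio": 0.31,
    "requires_human_review": False,
}

response = requests.post(
    VALIDATE_URL,
    json={"context": context},
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=5,
)
response.raise_for_status()
decision = response.json()  # assumed keys: decision, reason, audit_id

if decision["decision"] == "ALLOW":
    print("proceed, audit id:", decision["audit_id"])
elif decision["decision"] == "ESCALATE":
    print("route to human reviewer:", decision["reason"])  # Article 14 oversight
else:  # BLOCK
    print("action refused:", decision["reason"])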

Every decision is recorded in an immutable audit ledger. You can replay any past decision by providing the policy_set_version and the normalized input hash — the result will be identical.
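
A sketch of a deterministic replay check follows. The documented replay inputs are the policy_set_version and the normalized input hash; the endpoint path, parameter names, and record fields shown here are hypothetical placeholders.

import requests

# Hypothetical replay endpoint and record fields; only policy_set_version and
# the normalized input hash are named in the description above.
REPLAY_URL = "https://corules.example.com/v1/replay"

original = {
    "audit_id": "aud_0192example",
    "policy_set_version": "credit-controls-v14",
    "input_hash": "sha256:9f2ce41a07b3",  # normalized input hash from the ledger
    "decision": "ESCALATE",
}

response = requests.post(
    REPLAY_URL,
    json={
        "policy_set_version": original["policy_set_version"],
        "input_hash": original["input_hash"],
    },
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=5,
)
response.raise_for_status()
replayed = response.json()

# Same policy version plus same normalized input must reproduce the decision.
assert replayed["decision"] == original["decision"]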

Policy example

Policies are written in CEL (Common Expression Language). They are compiled once at publish time and evaluated in microseconds at request time.

// Article 9 risk controls: block decisions violating risk thresholds
// Article 12 record-keeping: every decision logged immutably
// Article 14 human oversight: escalate decisions beyond autonomous authority

// Example: credit decision with EU AI Act controls
(credit_score >= params.min_credit_score                   // Article 9: risk control
  && dti_ratio <= params.max_dti)                          // Article 9: risk control
|| (credit_score >= params.escalation_floor                // Article 14: human oversight
    && context.requires_human_review == false)
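
The policy reads request fields (credit_score, dti_ratio, context.requires_human_review) and published parameters (params.min_credit_score, params.max_dti, params.escalation_floor). The Python sketch below walks through one evaluation; the parameter values are assumptions chosen for illustration, and the mapping from a true result to an allowed decision is this sketch's simplification.

# Parameter values are illustrative; the names come from the policy above.
params = {"min_credit_score": 680, "max_dti": 0.36, "escalation_floor": 620}

# Request context the agent would send to /v1/validate.
context = {"credit_score": 712, "dti_ratio": 0.31, "requires_human_review": False}

# 712 >= 680 and 0.31 <= 0.36, so the Article 9 branch already holds.
article_9_branch = (
    context["credit_score"] >= params["min_credit_score"]
    and context["dti_ratio"] <= params["max_dti"]
)
oversight_branch = (
    context["credit_score"] >= params["escalation_floor"]
    and not context["requires_human_review"]  # mirrors == false in the CEL
)
policy_result = article_9_branch or oversight_branch  # True: allowed in this sketch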

Frequently Asked Questions

Which AI systems are classified as high-risk under the EU AI Act?

High-risk systems include those used in: biometric identification, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services (credit, insurance), law enforcement, border management, and administration of justice. Most enterprise AI used in HR, lending, and compliance falls into this category.

What does 'human oversight' mean under Article 14?

Article 14 requires that high-risk AI systems be designed so humans can 'effectively oversee' them during operation. This includes the ability to understand system capabilities, monitor for anomalies, and override or correct decisions. Corules's ESCALATE mechanism implements this: decisions beyond autonomous authority are routed to human reviewers with full context.

When does the EU AI Act come into force?

The EU AI Act entered into force in August 2024. High-risk AI system requirements apply from August 2026. Organizations deploying high-risk systems should have governance controls operational before this date.

See it working in your environment

Start free — no credit card, no sales call. Evaluate up to 1,000 decisions per month.

Get started free