EU AI Act High-Risk AI System Compliance Requirements

For legal, compliance, and technology teams identifying which operational controls EU AI Act Articles 9–15 require for high-risk AI systems.

The problem

The EU AI Act classifies AI systems used in employment, credit, insurance, education, and essential services as high-risk. High-risk systems must satisfy requirements in Articles 9–15, including: risk management (Article 9), data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency (Article 13), human oversight (Article 14), and accuracy/robustness (Article 15). The enforcement gap for most enterprises is Articles 9, 12, and 14 — runtime risk controls, decision records, and human oversight mechanisms. Corules addresses all three operationally: CEL policy enforcement implements Article 9 risk controls; the immutable audit ledger satisfies Article 12 record-keeping; the ESCALATE mechanism implements Article 14 human oversight for decisions that exceed autonomous authority.

How Corules solves it

Corules's policy runtime evaluates structured context against compiled CEL expressions — returning ALLOW, BLOCK, or ESCALATE with a reason and audit ID.
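Corules's actual API is not reproduced here; the following is a minimal Python sketch of the evaluation model described above, using hypothetical names (`evaluate`, `Decision`) to illustrate how a boolean policy outcome plus an escalation threshold can map to ALLOW, BLOCK, or ESCALATE with a reason and audit ID.

```python
import uuid
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str   # "ALLOW", "BLOCK", or "ESCALATE"
    reason: str
    audit_id: str  # Article 12: identifier for the immutable decision record


def evaluate(context: dict, params: dict) -> Decision:
    """Hypothetical illustration: the real runtime compiles CEL expressions;
    this mirrors the credit-decision example policy in plain Python."""
    audit_id = str(uuid.uuid4())  # every evaluation is logged, even BLOCKs
    # Article 9: autonomous approval only within risk thresholds
    if (context["credit_score"] >= params["min_credit_score"]
            and context["dti_ratio"] <= params["max_dti"]):
        return Decision("ALLOW", "within risk thresholds", audit_id)
    # Article 14: borderline cases route to a human reviewer
    if context["credit_score"] >= params["escalation_floor"]:
        return Decision("ESCALATE", "requires human review", audit_id)
    return Decision("BLOCK", "outside risk thresholds", audit_id)


d = evaluate({"credit_score": 640, "dti_ratio": 0.52},
             {"min_credit_score": 680, "max_dti": 0.43,
              "escalation_floor": 620})
print(d.outcome)  # ESCALATE
```

The three-way split keeps the Article 9 risk controls strict while reserving a band of borderline decisions for Article 14 human oversight rather than rejecting them outright.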

Policy example

// Article 14 human oversight: escalate decisions beyond autonomous authority
// Article 9 risk controls: block decisions violating risk thresholds
// Article 12 record-keeping: every decision logged immutably

// Example: credit decision with EU AI Act controls
(credit_score >= params.min_credit_score                   // Article 9: risk control
  && dti_ratio <= params.max_dti)                          // Article 9: risk control
  || (credit_score >= params.escalation_floor              // Article 14: human oversight
      && !context.requires_human_review)                   //   already satisfied for this case

Frequently Asked Questions

Which AI systems are classified as high-risk under the EU AI Act?

High-risk systems include those used in: biometric identification, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services (credit, insurance), law enforcement, border management, and administration of justice. Most enterprise AI used in HR, lending, and compliance falls into this category.

What does 'human oversight' mean under Article 14?

Article 14 requires that high-risk AI systems be designed so humans can 'effectively oversee' them during operation. This includes the ability to understand system capabilities, monitor for anomalies, and override or correct decisions. Corules's ESCALATE mechanism implements this: decisions beyond autonomous authority are routed to human reviewers with full context.

When does the EU AI Act come into force?

The EU AI Act entered into force on 1 August 2024. Requirements for high-risk AI systems apply from 2 August 2026. Organizations deploying high-risk systems should have governance controls operational before that date.

Stop limiting AI to suggestions.

Start for free