Fair Lending-Compliant AI Credit Decisions
The Problem
Lenders must make fast credit decisions while meeting ECOA/FCRA adverse action notice requirements and proving that those decisions are not discriminatory.
How Corules solves it
When an AI model scores a credit application, Corules validates the decision against underwriting policy before it is rendered. Approved applications proceed. Denied applications automatically generate specific adverse action reasons citing the exact factors that failed (debt-to-income ratio, credit score threshold, employment verification). Every decision is replayable for regulatory examination.
BLOCK: DTI ratio 52% exceeds max_dti = 43%. Adverse action reason: debt-to-income ratio.
Policy example
// Credit decision policy (CEL)
context.credit_score >= params.min_credit_score
&& context.dti_ratio <= params.max_dti
&& context.employment_verified == true
&& context.loan_amount <= params.max_loan_by_tier[context.applicant_tier]
Integration options
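One way a host service might mirror the policy above outside the CEL engine, collecting every failed factor as an adverse action reason code. Field names follow the policy example; the surrounding function and data shapes are illustrative, not Corules' actual API.

```python
# Pure-Python sketch of the CEL credit policy, returning the failed
# factors as adverse action reasons. Structure is illustrative only.

def evaluate(context: dict, params: dict) -> tuple[bool, list[str]]:
    reasons = []
    if context["credit_score"] < params["min_credit_score"]:
        reasons.append("credit score")
    if context["dti_ratio"] > params["max_dti"]:
        reasons.append("debt-to-income ratio")
    if not context["employment_verified"]:
        reasons.append("employment verification")
    if context["loan_amount"] > params["max_loan_by_tier"][context["applicant_tier"]]:
        reasons.append("loan amount exceeds tier limit")
    return (not reasons, reasons)

approved, reasons = evaluate(
    {"credit_score": 700, "dti_ratio": 0.52, "employment_verified": True,
     "loan_amount": 200_000, "applicant_tier": "B"},
    {"min_credit_score": 660, "max_dti": 0.43,
     "max_loan_by_tier": {"A": 500_000, "B": 300_000}},
)
# approved is False; reasons is ["debt-to-income ratio"], matching the
# BLOCK example above.
```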
Frequently Asked Questions
How does this help with ECOA adverse action requirements?
Every BLOCK decision returns specific reason codes that map directly to adverse action notice language. Because each reason cites the actual policy factor that failed, notices can state the specific principal reasons for denial that ECOA (Regulation B) requires.
Can decisions be replayed for regulatory examination?
Yes. Every decision stores a normalized input hash and policy version. The exact decision can be reproduced at any future point for audit or dispute.
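The replay mechanism described above can be sketched as follows. The normalization scheme (sorted-key JSON hashed with SHA-256) and the version identifier are assumptions for illustration, not necessarily what Corules stores.

```python
# Sketch of decision replay: store a normalized input hash plus the
# policy version, then re-derive the hash at examination time.

import hashlib
import json

def input_hash(context: dict) -> str:
    """Hash a canonical (sorted-key, compact) JSON form of the inputs."""
    canonical = json.dumps(context, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

record = {
    "policy_version": "credit-policy@v7",  # illustrative version id
    "input_hash": input_hash({"dti_ratio": 0.52, "credit_score": 688}),
}

# During an audit, normalize the disputed inputs again and compare.
# Key order does not matter because serialization sorts keys.
replay_ok = input_hash({"credit_score": 688, "dti_ratio": 0.52}) == record["input_hash"]
```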
Does Corules detect discriminatory patterns?
Corules enforces policy deterministically. Disparate impact analysis (segmenting outcomes by protected class) is a separate reporting layer built on audit log data.
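A reporting layer like the one mentioned above might start by segmenting outcomes from the audit log, e.g. as input to a four-fifths-rule screen. The log row shape and field names here are assumptions.

```python
# Illustrative disparate impact reporting step: approval rate per
# segment, computed from audit log rows.

from collections import defaultdict

def approval_rates(audit_rows: list[dict]) -> dict[str, float]:
    """Return the fraction of ALLOW decisions per segment."""
    totals: dict[str, int] = defaultdict(int)
    approvals: dict[str, int] = defaultdict(int)
    for row in audit_rows:
        seg = row["segment"]
        totals[seg] += 1
        approvals[seg] += row["action"] == "ALLOW"
    return {seg: approvals[seg] / totals[seg] for seg in totals}

rates = approval_rates([
    {"segment": "A", "action": "ALLOW"},
    {"segment": "A", "action": "BLOCK"},
    {"segment": "B", "action": "ALLOW"},
    {"segment": "B", "action": "ALLOW"},
])
# rates: {"A": 0.5, "B": 1.0}
```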