Model Risk Management (MRM)
The systematic process of identifying, measuring, and controlling risks arising from the use of AI and machine learning models in business decisions.
What it means
Model risk management (MRM) originated in financial services as a framework for governing quantitative models used in credit, trading, and risk decisions. With the rise of generative AI in enterprise workflows, MRM principles now apply to LLMs and AI agents operating across business functions.
Key MRM disciplines include: model validation (ensuring the model performs as intended), model monitoring (detecting drift or degradation in production), and model governance (controlling which models can be deployed and under what conditions). The newest challenge is enforcement governance: ensuring that even a well-performing model cannot take actions that violate business policy.
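The monitoring discipline above can be made concrete with a small sketch. A common approach is to compare the model's production score distribution against its validation baseline using the Population Stability Index (PSI), flagging drift when the index crosses a rule-of-thumb threshold. The function and threshold below are illustrative assumptions, not part of any specific monitoring product.

```python
# Hypothetical sketch of model monitoring via the Population Stability
# Index (PSI): compare production scores against a validation baseline.
import math

def psi(baseline, production, bins=10):
    """PSI between two samples of model scores in [0, 1)."""
    edges = [i / bins for i in range(bins + 1)]

    def frac(sample, lo, hi):
        # Crude smoothing: treat an empty bin as one observation
        # so the log term stays defined. Real tools smooth more carefully.
        n = sum(1 for x in sample if lo <= x < hi) or 1
        return n / len(sample)

    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        b, p = frac(baseline, lo, hi), frac(production, lo, hi)
        total += (p - b) * math.log(p / b)
    return total

# Common rule of thumb: PSI > 0.2 signals significant drift.
baseline = [0.1, 0.2, 0.25, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
shifted  = [0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.92, 0.95, 0.99]
print(psi(baseline, baseline) < 0.2)  # identical distribution: stable
print(psi(baseline, shifted) > 0.2)   # scores shifted upward: drift
```

Note the point of the example: PSI says nothing about whether any individual prediction is correct; it only detects that the population the model now sees differs from the one it was validated on, which is what triggers a revalidation under MRM.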
SR 11-7, the supervisory guidance on model risk management issued in 2011 by the US Federal Reserve jointly with the OCC, defines the model risk management standards that financial institutions apply, and these principles are increasingly adopted across other regulated industries.
Why enterprise executives need to understand this
For CISOs, compliance officers, and CIOs in regulated industries, MRM is not optional. Banking regulators, insurance supervisors, and healthcare authorities all have expectations around model governance. When AI agents are used in regulated decisions — credit approvals, insurance claims, healthcare eligibility — MRM requirements apply directly. The runtime enforcement layer is a critical MRM control.
How Corules implements this
Corules addresses the runtime enforcement component of MRM. While model validation and monitoring tools (like Fiddler, Evidently, or WhyLabs) ensure models perform well statistically, Corules ensures that model outputs cannot cause policy violations at execution time — regardless of model behavior. This creates a defense-in-depth approach: validate the model, then gate its outputs.
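The "validate the model, then gate its outputs" pattern can be sketched as follows. This is an illustrative toy, not Corules' actual API: the action type, policy table, and `gate` function are all hypothetical names. The key property is that policy lives outside the model, so no model failure mode (drift, hallucination, prompt injection) can bypass it.

```python
# Illustrative sketch of a runtime enforcement gate (hypothetical names,
# not a real product API). A validated model proposes an action; a
# separate policy layer decides whether it may execute.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # e.g. "approve_credit"
    amount: float  # dollar amount the agent wants to commit

# Business policy is defined outside the model. Even a well-performing
# model cannot exceed these limits at execution time.
POLICY_LIMITS = {"approve_credit": 50_000, "issue_refund": 1_000}

def gate(action: Action) -> bool:
    """Return True only if the proposed action is within policy."""
    limit = POLICY_LIMITS.get(action.kind)
    # Unknown action kinds are denied by default (fail closed).
    return limit is not None and action.amount <= limit

print(gate(Action("approve_credit", 25_000)))   # within limit: allowed
print(gate(Action("approve_credit", 250_000)))  # over limit: blocked
print(gate(Action("delete_records", 0)))        # unlisted action: blocked
```

The fail-closed default is the design choice that matters for MRM: statistical validation answers "does the model usually behave?", while the gate answers "what is the worst it can do?", and the two controls are independent.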
Frequently Asked Questions
Does Corules replace model validation tools?
No — model validation tools assess whether a model performs well statistically. Corules ensures that even a well-performing model cannot produce an action that violates policy. They address different risks and are complementary.
See Model Risk Management (MRM) in production
Corules implements every concept in this glossary. Join enterprise teams enforcing policy at runtime — no credit card required.
Request access