
# Risk Management System Plan (Article 9 EU AI Act)

> **Legal basis:** Art. 9 (risk management system), with operational links to Art. 10-15 and Annex III contexts.
> **Objective:** Define an iterative, lifecycle-based process to identify, estimate, evaluate, mitigate, and monitor AI risks.

---

## 0) Governance and Scope

- Plan owner: [Name/Role]
- Version: [vX.Y]
- Effective date: [YYYY-MM-DD]
- Review cycle: [Monthly/Quarterly]
- System(s) covered: [List]
- Lifecycle phases covered: [Design/Development/Validation/Deployment/Post-market]
- Escalation authority: [Role]

**Guidance note:** Article 9 requires the risk management system to run as a continuous, iterative process throughout the system's lifecycle, not as a one-off documentation exercise.

---

## 1) Risk Context Definition

1. Intended purpose statement: [Purpose]
2. Foreseeable misuse scenarios: [List]
3. Affected groups: [Internal/external stakeholders]
4. Rights/safety domains potentially impacted: [Privacy/non-discrimination/safety/etc.]
5. Operational context assumptions: [Environment, dependencies]
6. Regulatory context assumptions: [Articles, standards]

**Common mistake:** starting with controls before defining use context and harm pathways.

---

## 2) Risk Identification Methodology

### 2.1 Identification inputs
- Incident history from similar systems
- Error and drift analyses
- User complaints and operational feedback
- Red-team/adversarial tests
- Domain expert workshops

### 2.2 Risk categories (minimum)
- Fundamental rights harms
- Safety harms
- Financial harms
- Performance and reliability harms
- Security and abuse harms
- Compliance and governance harms

### 2.3 Risk statement format
Use: "If [trigger/event], then [impact] may occur for [affected group], because [cause]."

Example: "If post-deployment input data quality degrades, then incorrect automated decisions may occur for affected end users, because the model was validated on cleaner data."

---

## 3) Risk Estimation Framework

### 3.1 Scoring dimensions
- Likelihood (1-5)
- Severity (1-5)
- Detectability (1-5, optional)
- Exposure duration (short/medium/long)

### 3.2 Composite score formula
- Base score: Likelihood × Severity
- Adjustments: add a defined modifier (e.g. +1) for low detectability or broad impact, capped so the total stays within the 1-25 bands in 3.3 (see the sketch at the end of this section)

### 3.3 Thresholds
- Low: 1-6
- Medium: 7-12
- High: 13-19
- Critical: 20-25

**Guidance note:** define thresholds before scoring to avoid outcome bias.
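
A minimal sketch of the arithmetic in 3.2-3.3, assuming illustrative +1 modifiers for low detectability and broad impact (the plan owner defines the actual modifier values); the band boundaries mirror 3.3.

```python
def composite_score(likelihood: int, severity: int,
                    low_detectability: bool = False,
                    broad_impact: bool = False) -> int:
    """Base score = Likelihood x Severity (each 1-5), plus adjustment modifiers."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be on the 1-5 scale")
    score = likelihood * severity
    # Illustrative modifiers -- replace with the values defined in this plan.
    score += 1 if low_detectability else 0
    score += 1 if broad_impact else 0
    return min(score, 25)  # keep the total inside the 1-25 threshold bands


def risk_band(score: int) -> str:
    """Map a composite score to the Section 3.3 threshold bands."""
    if score <= 6:
        return "Low"
    if score <= 12:
        return "Medium"
    if score <= 19:
        return "High"
    return "Critical"
```

For example, `risk_band(composite_score(4, 4, low_detectability=True))` returns `"High"` (16 + 1 = 17).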

---

## 4) Risk Evaluation and Acceptability

1. Compare risk score against approved thresholds
2. Determine acceptability status:
   - Acceptable with monitoring
   - Conditionally acceptable with mitigation
   - Not acceptable (block release)
3. Document decision rationale and sign-off
4. Record dissent/escalation if reviewers disagree
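
A sketch of how steps 1-3 above could be captured as a decision record; the band-to-status mapping is an assumption and the thresholds approved under Section 3.3 take precedence.

```python
# Hypothetical mapping from risk band to acceptability status; the approved
# thresholds and decision rules in this plan govern the actual mapping.
ACCEPTABILITY_BY_BAND = {
    "Low": "Acceptable with monitoring",
    "Medium": "Conditionally acceptable with mitigation",
    "High": "Conditionally acceptable with mitigation",
    "Critical": "Not acceptable (block release)",
}


def evaluation_record(risk_id: str, band: str, rationale: str, approver: str) -> dict:
    """Decision record capturing status, rationale, and sign-off (steps 2-3 above)."""
    return {
        "risk_id": risk_id,
        "band": band,
        "status": ACCEPTABILITY_BY_BAND[band],
        "rationale": rationale,
        "signed_off_by": approver,
    }
```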

---

## 5) Mitigation Planning

For each medium/high/critical risk, define:

- Control ID
- Control description
- Control type (preventive/detective/corrective)
- Owner
- Target date
- Validation method
- Expected residual risk

**Mitigation examples:**
- Add human override checkpoint
- Increase confidence threshold for auto-actions
- Add fairness monitoring by subgroup
- Remove sensitive features from model input
- Add refusal mode for out-of-scope prompts
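
A minimal sketch of three of the examples above (refusal mode for out-of-scope prompts, human override checkpoint, higher confidence threshold for auto-actions); the threshold value and names are illustrative, not prescribed by this plan.

```python
AUTO_ACTION_THRESHOLD = 0.95  # illustrative; raising it routes more cases to human review


def route_decision(confidence: float, in_scope: bool) -> str:
    """Gate automated actions behind scope and confidence checks."""
    if not in_scope:
        return "refuse"        # refusal mode for out-of-scope prompts
    if confidence < AUTO_ACTION_THRESHOLD:
        return "human_review"  # human override checkpoint
    return "auto_action"       # high-confidence automated action
```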

---

## 6) Residual Risk Management

- Residual risk score after control implementation: [Score]
- Residual risk rationale: [Explanation]
- Residual risk acceptance authority: [Role]
- Acceptance date: [YYYY-MM-DD]
- Conditions for continued acceptance: [Conditions]

**Rule:** residual high/critical risk requires explicit executive-level sign-off.
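
A small guard for the rule above, reusing the `risk_band` helper from the Section 3 sketch; the executive role names are placeholders to be replaced by this plan's escalation authority.

```python
EXECUTIVE_ROLES = {"CEO", "CTO", "Chief Risk Officer"}  # placeholder role names


def residual_acceptance_allowed(residual_score: int, approver_role: str) -> bool:
    """High/critical residual risk may only be accepted by an executive-level role."""
    band = risk_band(residual_score)  # helper defined in the Section 3 sketch
    if band in ("High", "Critical"):
        return approver_role in EXECUTIVE_ROLES
    return True
```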

---

## 7) Testing Strategy and Evidence

### 7.1 Pre-deployment testing
- Functional testing
- Robustness testing
- Data quality and bias testing
- Human factors/usability testing
- Security abuse-case testing

### 7.2 Post-deployment validation
- Drift check frequency
- KPI thresholds and alerting
- Trigger events for retraining/recalibration
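
One way to implement the drift checks listed above is a population stability index (PSI) comparison between a reference window and the current window; the bin count and the 0.2 alert threshold are assumptions the plan owner should confirm.

```python
import numpy as np


def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index of the current distribution against a reference."""
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # cover values outside the reference range
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    ref_pct = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
    cur_pct = np.clip(cur_counts / cur_counts.sum(), 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))


def material_drift(reference_scores: np.ndarray, current_scores: np.ndarray,
                   threshold: float = 0.2) -> bool:
    """Illustrative alert rule: PSI above ~0.2 is commonly treated as material drift."""
    return psi(reference_scores, current_scores) > threshold
```

An alert from `material_drift` would then feed the re-open triggers in Section 10 and the corrective action loop in Section 11.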

### 7.3 Evidence artifacts
- Test reports
- Calibration reports
- Monitoring dashboards
- Audit logs

---

## 8) Roles and Responsibilities (RACI)

| Activity | Product | ML/Engineering | Compliance | Legal | Ops |
|---|---|---|---|---|---|
| Risk identification | R | R | C | C | C |
| Risk scoring | C | R | R | C | C |
| Mitigation implementation | R | R | C | C | C |
| Residual risk approval | C | C | R | A | C |
| Monitoring & incidents | C | R | C | C | A |

---

## 9) Risk Register Template (Fillable)

| Risk ID | Risk Statement | Category | Likelihood | Severity | Score | Owner | Mitigation | Residual Score | Status |
|---|---|---|---|---|---|---|---|---|---|
| R-001 | [If..., then...] | [Category] | [1-5] | [1-5] | [Score] | [Role] | [Control] | [Score] | [Open/Closed] |
| R-002 | [If..., then...] | [Category] | [1-5] | [1-5] | [Score] | [Role] | [Control] | [Score] | [Open/Closed] |
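
If the register is also maintained programmatically, each row can be kept as a structured record whose fields mirror the table columns; a minimal sketch (class and field names are illustrative):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RiskRegisterEntry:
    """One row of the Section 9 risk register."""
    risk_id: str               # e.g. "R-001"
    risk_statement: str        # "If ..., then ... may occur for ..., because ..."
    category: str
    likelihood: int            # 1-5
    severity: int              # 1-5
    owner: str
    mitigation: str
    residual_score: Optional[int] = None
    status: str = "Open"       # Open/Closed

    @property
    def score(self) -> int:
        """Base score; apply the Section 3.2 modifiers where relevant."""
        return self.likelihood * self.severity
```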

---

## 10) Monitoring and Review Cadence

- Daily: critical incident alerts
- Weekly: KPI outlier review
- Monthly: risk register updates
- Quarterly: full risk re-evaluation
- Event-driven: major model/data/system change

### Mandatory re-open triggers
- Material performance drift
- New harmful incident pattern
- Substantial modification
- New regulatory guidance
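
One way to keep the cadence and re-open triggers machine-checkable is a small configuration like the sketch below; the event identifiers are illustrative encodings of the items above.

```python
REVIEW_CADENCE = {
    "daily": ["critical_incident_alerts"],
    "weekly": ["kpi_outlier_review"],
    "monthly": ["risk_register_update"],
    "quarterly": ["full_risk_reevaluation"],
}

# Mandatory re-open triggers from this section, as machine-readable event names.
REOPEN_TRIGGERS = {
    "material_performance_drift",
    "new_harmful_incident_pattern",
    "substantial_modification",
    "new_regulatory_guidance",
}


def should_reopen_register(observed_events: set) -> bool:
    """Re-open the affected risk register entries when any mandatory trigger fires."""
    return bool(observed_events & REOPEN_TRIGGERS)
```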

---

## 11) Incident and Corrective Action Loop

- Incident intake channel: [Channel]
- Severity triage model: [Model]
- Root cause analysis method: [5 Whys/Fishbone]
- Corrective action owner: [Role]
- Verification of effectiveness: [Method]
- Closure criteria: [Criteria]

---

## 12) Metrics Dashboard (Minimum Set)

- False positive/negative rates by segment
- Override rate by human reviewers
- Drift index trend
- Incident count and severity trend
- Time to mitigation closure
- Residual high-risk count
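
A sketch of the first metric, false positive/negative rates by segment, assuming a pandas DataFrame with binary 0/1 label and prediction columns; the column names are illustrative.

```python
import pandas as pd


def error_rates_by_segment(df: pd.DataFrame,
                           label_col: str = "label",
                           pred_col: str = "prediction",
                           segment_col: str = "segment") -> pd.DataFrame:
    """False positive / false negative rates per segment."""
    def _rates(group: pd.DataFrame) -> pd.Series:
        negatives = int((group[label_col] == 0).sum())
        positives = int((group[label_col] == 1).sum())
        false_pos = int(((group[label_col] == 0) & (group[pred_col] == 1)).sum())
        false_neg = int(((group[label_col] == 1) & (group[pred_col] == 0)).sum())
        return pd.Series({
            "false_positive_rate": false_pos / negatives if negatives else float("nan"),
            "false_negative_rate": false_neg / positives if positives else float("nan"),
            "sample_size": len(group),
        })
    return df.groupby(segment_col).apply(_rates)
```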

---

## 13) Common Failure Patterns to Avoid

1. Treating risk management as legal-only documentation.
2. No explicit thresholds for acceptability.
3. Controls defined without owners/dates.
4. No linkage between incidents and risk register updates.
5. Ignoring foreseeable misuse scenarios.

---

## 14) Approval and Accountability

- Prepared by: [Name, Role, Date]
- Reviewed by (Engineering): [Name, Role, Date]
- Reviewed by (Compliance): [Name, Role, Date]
- Approved by: [Name, Role, Date]
- Next scheduled review: [YYYY-MM-DD]