R1 - Unknown training data with client or regulated exposure (Prohibited)
Condition: Training data source is Unknown AND (output reaches a client OR use case touches pricing, coverage, or claims).
Rationale: Training data provenance is unknown and the use case touches clients or regulated decisions. The use case cannot proceed without a documented data source.
Remediation:
Data provenance is not documented for a client-facing or regulated use case.
- Get a vendor data sheet, model card, or internal data inventory before production use.
- Confirm whether protected-class, proxy, PHI, claim, policy, or external consumer data is used.
- Restrict the use case to sandbox testing until provenance is documented.
References:
- NAIC Model Bulletin on the Use of AI Systems by Insurers (2023) - Data governance
- EU AI Act Art. 10 - Data and data governance
Kind: Base rule
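Each Condition line can be read as a boolean predicate over the use-case attributes. A minimal sketch of R1 in Python (the `UseCase` fields and the string values are illustrative assumptions, not the tool's actual schema):

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    training_data_source: str   # assumed values: "Unknown", "Vendor-documented", "Public"
    output_reaches_client: bool
    regulated_matter: bool      # touches pricing, coverage, or claims

def r1_applies(uc: UseCase) -> bool:
    """R1 fires when provenance is Unknown AND there is client or regulated exposure."""
    return uc.training_data_source == "Unknown" and (
        uc.output_reaches_client or uc.regulated_matter
    )
```

Under this reading, an unknown-provenance tool with neither client nor regulated exposure does not trigger R1; it falls to R8a instead.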
R3a - AI decides a regulated matter without human review (Prohibited)
Condition: Use case touches pricing, coverage, or claims AND AI makes the decision without human review.
Rationale: Autonomous AI decisioning on a regulated matter fails meaningful human oversight requirements.
Remediation:
The AI makes a regulated insurance decision without meaningful human review.
- Add pre-effect human review before the decision affects a policyholder, applicant, or claimant.
- Create override, escalation, and exception handling logs.
- Re-run the assessment after the workflow is changed from autonomous decisioning to reviewed decision support.
References:
- EU AI Act Art. 14 - Human oversight
- NAIC Model Bulletin Section 4 - Governance and risk management
- NYDFS Circular Letter No. 7 (2024) - Use of AI in insurance underwriting and pricing
Kind: Base rule
R3b - AI decides a regulated matter with human review (High)
Condition: Use case touches pricing, coverage, or claims AND AI makes the decision subject to human review.
Rationale: AI-driven regulated decisions require documented human review, audit trail, and bias monitoring.
Remediation:
The AI decides a regulated matter, even with review.
- Document what the reviewer must check before accepting the AI output.
- Sample accepted, overridden, and escalated outputs for bias, drift, and accuracy.
- Define who can override the model and when escalation is required.
References:
- NAIC Model Bulletin Section 4 - Governance and risk management
- Colorado SB21-169 - Restrictions on insurers' use of external consumer data and algorithms
- NYDFS Circular Letter No. 7 (2024)
Kind: Base rule
R3c - AI recommends on a regulated matter (High)
Condition: Use case touches pricing, coverage, or claims AND AI recommends while a human decides.
Rationale: Recommendations materially influence regulated outcomes and require governance equivalent to decisioning use cases.
Remediation:
The AI materially influences a regulated matter.
- Treat the recommendation as decision support, not informal productivity tooling.
- Require reviewer attestation that the human made the final decision.
- Monitor acceptance rate, overrides, and downstream outcomes.
References:
- NAIC Model Bulletin Section 4 - Governance and risk management
- Colorado SB21-169
Kind: Base rule
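R3a, R3b, and R3c form a three-way split on the AI's role in a regulated matter. A hedged sketch of that mapping in Python (the `decision_role` strings are assumptions chosen for illustration, not the tool's vocabulary):

```python
def regulated_decision_rule(decision_role: str) -> tuple[str, str]:
    """Map the AI's role in a pricing, coverage, or claims matter to the
    rule that fires and its base tier.

    Assumed decision_role values:
      "decides_unreviewed" - AI decides, no human review   -> R3a, Prohibited
      "decides_reviewed"   - AI decides, human reviews     -> R3b, High
      "recommends"         - AI recommends, human decides  -> R3c, High
    """
    mapping = {
        "decides_unreviewed": ("R3a", "Prohibited"),
        "decides_reviewed": ("R3b", "High"),
        "recommends": ("R3c", "High"),
    }
    return mapping[decision_role]
```

Note that R3b and R3c land on the same tier: per the rationale above, a recommendation that materially influences the outcome gets governance equivalent to reviewed decisioning.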
R4a - Generative output reaches a client without human review (High)
Condition: Output reaches a client AND AI is generative AND no human reviews output before it is sent.
Rationale: Unreviewed generative output to clients creates unbounded risk for hallucination, misstatement, and regulated disclosures.
Remediation:
Generative output can reach a client without human review.
- Insert mandatory pre-send review for every external output.
- Add approved templates, prohibited-content checks, and escalation paths.
- Block direct send until review and logging are active.
References:
- NAIC Model Bulletin Section 4 - Consumer protection
- NYDFS Circular Letter No. 7 (2024)
Kind: Base rule
R4b - Generative output reaches a client with human review (Moderate)
Condition: Output reaches a client AND AI is generative AND a human reviews every output before it is sent.
Rationale: Reviewed generative output reduces external risk but still requires disclosure, logging, and drift monitoring.
Remediation:
Reviewed generative output still creates external communication risk.
- Keep approved templates and review criteria close to the workflow.
- Sample outputs for misstatements, hallucinations, and disclosure issues.
- Confirm whether AI-use disclosure is required by policy or jurisdiction.
References:
- NAIC Model Bulletin Section 4 - Consumer protection
Kind: Base rule
R5 - Classifier output reaches a client (Moderate)
Condition: Output reaches a client AND AI is classifying (not generative).
Rationale: Classifier output affecting clients requires sampled review, performance monitoring, and clear escalation.
Remediation:
Classifier output reaches a client or policyholder.
- Validate model performance before launch and after material changes.
- Provide a path for human escalation and correction.
- Monitor error rates and client-impacting exceptions.
References:
- NAIC Model Bulletin Section 4 - Testing and validation
- NIST AI RMF - Measure function
Kind: Base rule
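R4a, R4b, and R5 together cover output that reaches a client: the tier turns on whether the model is generative and, if generative, whether every output is reviewed before send. A sketch under those assumptions (function and parameter names are illustrative):

```python
def client_output_rule(generative: bool, reviewed_before_send: bool) -> tuple[str, str]:
    """Rule and base tier for AI output that reaches a client.

    Generative output splits on pre-send review (R4a vs R4b);
    classifier output is Moderate under R5 regardless of the review flag,
    since R5's remediation relies on sampled review rather than
    per-output review.
    """
    if generative:
        return ("R4b", "Moderate") if reviewed_before_send else ("R4a", "High")
    return ("R5", "Moderate")
```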
R6 - External user audience without client-facing output (Moderate)
Condition: External users operate the tool but AI output does not go directly to clients.
Rationale: External operation expands the attack surface and creates supervision gaps even when output stays internal to the business.
Remediation:
External users operate the AI tool.
- Confirm external-user access controls, training, and acceptable-use limits.
- Log usage and review exceptions.
- Define who supervises external use and how misuse is remediated.
References:
- NAIC Model Bulletin Section 4 - Third party and vendor oversight
Kind: Base rule
R7 - Internal-only generative AI outside regulated domains (Moderate)
Condition: AI is generative AND audience is internal only AND use case does not touch regulated decisions.
Rationale: Internal generative AI still carries hallucination and drift risk and requires basic controls even without client or regulated exposure.
Remediation:
Internal generative AI can still create operational reliance risk.
- Define acceptable uses and prohibited uses.
- Require source checking before relying on generated content.
- Log material uses where the output affects business records or decisions.
References:
- NIST AI RMF - Govern and Manage functions
Kind: Base rule
R8 - Public training data with unknown curation (Moderate)
Condition: Training data or prompt source is public with unknown curation.
Rationale: Public uncurated data introduces bias, IP, and privacy risk that must be evaluated before production use.
Remediation:
Public or unknown-curation data introduces bias, privacy, and IP risk.
- Determine whether the data contains sensitive, copyrighted, or non-representative material.
- Run bias, privacy, and IP review before production use.
- Prefer documented vendor or internally curated data sources when possible.
References:
- EU AI Act Art. 10 - Data and data governance
- NAIC Model Bulletin Section 4 - Data governance
Kind: Base rule
R8a - Unknown data provenance outside client or regulated exposure (Moderate)
Condition: Training data source is Unknown AND output does not reach a client AND use case does not touch pricing, coverage, or claims.
Rationale: Unknown data provenance should not be treated as Low risk. The team must document the source before relying on the use case in production.
Remediation:
Data provenance is unknown even without client or regulated exposure.
- Document the source before relying on the system in production.
- Limit use to non-material internal testing until the source is known.
- Re-run the assessment once provenance is confirmed.
References:
- EU AI Act Art. 10 - Data and data governance
- NAIC Model Bulletin on the Use of AI Systems by Insurers (2023) - Data governance
Kind: Base rule
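R1, R8, and R8a can be read as one provenance routing step: Unknown provenance routes to R1 or R8a depending on exposure, and uncurated public data triggers R8 independently. A sketch in Python (parameter names are assumptions, not the tool's schema):

```python
def provenance_rules(source: str, curated: bool,
                     client_facing: bool, regulated: bool) -> list[tuple[str, str]]:
    """Return the data-provenance rules that fire, as (rule, tier) pairs.

    Assumed source values: "Unknown", "Public", "Vendor-documented".
    R1 and R8a are mutually exclusive branches of the Unknown case;
    R8 covers public data with unknown curation.
    """
    fired = []
    if source == "Unknown":
        if client_facing or regulated:
            fired.append(("R1", "Prohibited"))
        else:
            fired.append(("R8a", "Moderate"))
    if source == "Public" and not curated:
        fired.append(("R8", "Moderate"))
    return fired
```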
R2 - High-risk use case without logging (Prohibited)
Condition: Base tier is High AND no logging of inputs and outputs is in place.
Rationale: A High-risk use case without logging cannot be audited, investigated, or evidenced to regulators.
Remediation:
A high-risk use case cannot be audited because logging is missing.
- Add input, output, model version, reviewer, and override logging.
- Confirm retention period and access controls.
- Block launch until logs can support investigation and regulatory review.
References:
- NYDFS Circular Letter No. 7 (2024) - Documentation and audit trails
- EU AI Act Art. 12 - Record keeping
Kind: Modifier (evaluated after base tier is set)
R9 - Moderate-risk use case without logging (Moderate)
Condition: Base tier is Moderate AND no logging of inputs and outputs is in place.
Rationale: The Moderate tier requires logging. The tier is held at Moderate and deployment is blocked until logging is added.
Remediation:
A moderate-risk use case is missing required logging.
- Add basic input, output, user, timestamp, and model-version logging.
- Define who reviews exceptions.
- Re-run the assessment after logging is enabled.
References:
- NYDFS Circular Letter No. 7 (2024)
Kind: Modifier (evaluated after base tier is set)
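Because R2 and R9 are modifiers, evaluation is two-pass: compute the highest base tier first, then apply the logging modifiers. A minimal sketch of that ordering (tier names follow the catalog; the function shape is an assumption, and R9's deployment block is tracked outside the tier value, so it is not modeled here):

```python
TIER_ORDER = ["Low", "Moderate", "High", "Prohibited"]

def final_tier(base_tiers: list[str], logging_enabled: bool) -> str:
    """Pass 1: highest tier across all base rules that fired.
    Pass 2: logging modifiers. R2 escalates High to Prohibited when
    logging is absent; R9 holds Moderate at Moderate (the deployment
    block it imposes is a separate flag, not a tier change).
    """
    base = max(base_tiers, key=TIER_ORDER.index, default="Low")
    if not logging_enabled and base == "High":
        return "Prohibited"   # R2
    return base               # R9 leaves the Moderate tier unchanged
```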