Insurance AI Governance Framework

Framework

This framework defines the rules, remediation guidance, controls, jurisdiction prompts, and review expectations the tool uses. The engine evaluates the rules below in order; the final tier is the most severe tier among all rules that fire. Framework v1.2, updated 2026-04-26.

Tiers

  • Low: Internal productivity use with documented data, not client-facing, not regulated.
  • Moderate: AI assists on work that may touch clients or regulated topics, with human review and logging.
  • High: Output reaches clients directly, materially influences a regulated decision, or is generative with external exposure.
  • Prohibited: Do not deploy. AI decides a regulated matter without human review, training data is unknown with client or regulated exposure, or a High-risk use case has no logging.
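The most-severe-wins evaluation described above can be sketched in Python. This is a minimal illustration: the tier names come from the framework, but the function and variable names are assumptions, not part of the tool.

```python
from enum import IntEnum

class Tier(IntEnum):
    """Tiers ordered so that a larger value means a more severe tier."""
    LOW = 0
    MODERATE = 1
    HIGH = 2
    PROHIBITED = 3

def final_tier(fired_tiers):
    """Most severe tier among all fired rules; Low if no rule fires."""
    return max(fired_tiers, default=Tier.LOW)

# Example: R4b (Moderate) and R3b (High) both fire -> final tier is High.
assert final_tier([Tier.MODERATE, Tier.HIGH]) == Tier.HIGH
assert final_tier([]) == Tier.LOW
```

Because the result is a maximum, rule evaluation order affects only which remediation text is shown first, never the tier itself.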

Rules

  • R1: Unknown training data with client or regulated exposure (Prohibited)

    Condition: Training data source is Unknown AND (output reaches a client OR use case touches pricing, coverage, or claims).

    Rationale: Training data provenance is unknown and the use case touches clients or regulated decisions. Cannot proceed without a documented data source.

    Remediation:

    Data provenance is not documented for a client-facing or regulated use case.

    • Get a vendor data sheet, model card, or internal data inventory before production use.
    • Confirm whether protected-class, proxy, PHI, claim, policy, or external consumer data is used.
    • Restrict the use case to sandbox testing until provenance is documented.
    • NAIC Model Bulletin on the Use of AI Systems by Insurers (2023) - Data governance
    • EU AI Act Art. 10 - Data and data governance

    Kind: Base rule
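R1's condition can be read as a boolean predicate over the intake answers. A sketch, with function and parameter names that are illustrative rather than part of the framework:

```python
def r1_fires(training_data_source: str,
             output_reaches_client: bool,
             touches_pricing_coverage_claims: bool) -> bool:
    """R1: training data source is Unknown AND (client-facing OR regulated)."""
    return training_data_source == "Unknown" and (
        output_reaches_client or touches_pricing_coverage_claims
    )

# Unknown provenance with client exposure fires R1 (Prohibited).
assert r1_fires("Unknown", True, False)
# Documented provenance never fires R1, whatever the exposure.
assert not r1_fires("Vendor data sheet", True, True)
```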

  • R3a: AI decides a regulated matter without human review (Prohibited)

    Condition: Use case touches pricing, coverage, or claims AND AI makes the decision without human review.

    Rationale: Autonomous AI decisioning on a regulated matter fails meaningful human oversight requirements.

    Remediation:

    The AI makes a regulated insurance decision without meaningful human review.

    • Add pre-effect human review before the decision affects a policyholder, applicant, or claimant.
    • Create override, escalation, and exception handling logs.
    • Re-run the assessment after the workflow is changed from autonomous decisioning to reviewed decision support.
    • EU AI Act Art. 14 - Human oversight
    • NAIC Model Bulletin Section 4 - Governance and risk management
    • NYDFS Circular Letter No. 7 (2024) - Use of AI in insurance underwriting and pricing

    Kind: Base rule

  • R3b: AI decides a regulated matter with human review (High)

    Condition: Use case touches pricing, coverage, or claims AND AI makes the decision subject to human review.

    Rationale: AI-driven regulated decisions require documented human review, audit trail, and bias monitoring.

    Remediation:

    The AI decides a regulated matter, even with review.

    • Document what the reviewer must check before accepting the AI output.
    • Sample accepted, overridden, and escalated outputs for bias, drift, and accuracy.
    • Define who can override the model and when escalation is required.
    • NAIC Model Bulletin Section 4 - Governance and risk management
    • Colorado SB21-169 - Restrictions on insurers' use of external consumer data and algorithms
    • NYDFS Circular Letter No. 7 (2024)

    Kind: Base rule

  • R3c: AI recommends on a regulated matter (High)

    Condition: Use case touches pricing, coverage, or claims AND AI recommends while a human decides.

    Rationale: Recommendations materially influence regulated outcomes and require governance equivalent to decisioning use cases.

    Remediation:

    The AI materially influences a regulated matter.

    • Treat the recommendation as decision support, not informal productivity tooling.
    • Require reviewer attestation that the human made the final decision.
    • Monitor acceptance rate, overrides, and downstream outcomes.
    • NAIC Model Bulletin Section 4 - Governance and risk management
    • Colorado SB21-169

    Kind: Base rule

  • R4a: Generative output reaches a client without human review (High)

    Condition: Output reaches a client AND AI is generative AND no human reviews output before it is sent.

    Rationale: Unreviewed generative output to clients creates unbounded risk for hallucination, misstatement, and regulated disclosures.

    Remediation:

    Generative output can reach a client without human review.

    • Insert mandatory pre-send review for every external output.
    • Add approved templates, prohibited-content checks, and escalation paths.
    • Block direct send until review and logging are active.
    • NAIC Model Bulletin Section 4 - Consumer protection
    • NYDFS Circular Letter No. 7 (2024)

    Kind: Base rule

  • R4b: Generative output reaches a client with human review (Moderate)

    Condition: Output reaches a client AND AI is generative AND a human reviews every output before it is sent.

    Rationale: Reviewed generative output reduces external risk but still requires disclosure, logging, and drift monitoring.

    Remediation:

    Reviewed generative output still creates external communication risk.

    • Keep approved templates and review criteria close to the workflow.
    • Sample outputs for misstatements, hallucinations, and disclosure issues.
    • Confirm whether AI-use disclosure is required by policy or jurisdiction.
    • NAIC Model Bulletin Section 4 - Consumer protection

    Kind: Base rule

  • R5: Classifier output reaches a client (Moderate)

    Condition: Output reaches a client AND AI is classifying (not generative).

    Rationale: Classifier output affecting clients requires sampled review, performance monitoring, and clear escalation.

    Remediation:

    Classifier output reaches a client or policyholder.

    • Validate model performance before launch and after material changes.
    • Provide a path for human escalation and correction.
    • Monitor error rates and client-impacting exceptions.
    • NAIC Model Bulletin Section 4 - Testing and validation
    • NIST AI RMF - Measure function

    Kind: Base rule

  • R6: External user audience without client-facing output (Moderate)

    Condition: External users operate the tool but AI output does not go directly to clients.

    Rationale: External operation expands the attack surface and creates supervision gaps even when output stays internal to the business.

    Remediation:

    External users operate the AI tool.

    • Confirm external-user access controls, training, and acceptable-use limits.
    • Log usage and review exceptions.
    • Define who supervises external use and how misuse is remediated.
    • NAIC Model Bulletin Section 4 - Third party and vendor oversight

    Kind: Base rule

  • R7: Internal-only generative AI outside regulated domains (Moderate)

    Condition: AI is generative AND audience is internal only AND use case does not touch regulated decisions.

    Rationale: Internal generative AI still carries hallucination and drift risk; requires basic controls even without client or regulated exposure.

    Remediation:

    Internal generative AI can still create operational reliance risk.

    • Define acceptable uses and prohibited uses.
    • Require source checking before relying on generated content.
    • Log material uses where the output affects business records or decisions.
    • NIST AI RMF - Govern and Manage functions

    Kind: Base rule

  • R8: Public training data with unknown curation (Moderate)

    Condition: Training data or prompt source is public with unknown curation.

    Rationale: Public uncurated data introduces bias, IP, and privacy risk that must be evaluated before production use.

    Remediation:

    Public or unknown-curation data introduces bias, privacy, and IP risk.

    • Determine whether the data contains sensitive, copyrighted, or non-representative material.
    • Run bias, privacy, and IP review before production use.
    • Prefer documented vendor or internally curated data sources when possible.
    • EU AI Act Art. 10 - Data and data governance
    • NAIC Model Bulletin Section 4 - Data governance

    Kind: Base rule

  • R8a: Unknown data provenance outside client or regulated exposure (Moderate)

    Condition: Training data source is Unknown, even though output does not reach a client and the use case does not touch pricing, coverage, or claims.

    Rationale: Unknown data provenance should not be treated as Low risk. The team must document the source before relying on the use case in production.

    Remediation:

    Data provenance is unknown even without client or regulated exposure.

    • Document the source before relying on the system in production.
    • Limit use to non-material internal testing until the source is known.
    • Re-run the assessment once provenance is confirmed.
    • EU AI Act Art. 10 - Data and data governance
    • NAIC Model Bulletin on the Use of AI Systems by Insurers (2023) - Data governance

    Kind: Base rule

  • R2: High-risk use case without logging (Prohibited)

    Condition: Base tier is High AND no logging of inputs and outputs is in place.

    Rationale: A High-risk use case without logging cannot be audited, investigated, or evidenced to regulators.

    Remediation:

    A high-risk use case cannot be audited because logging is missing.

    • Add input, output, model version, reviewer, and override logging.
    • Confirm retention period and access controls.
    • Block launch until logs can support investigation and regulatory review.
    • NYDFS Circular Letter No. 7 (2024) - Documentation and audit trails
    • EU AI Act Art. 12 - Record keeping

    Kind: Modifier (evaluated after base tier is set)

  • R9: Moderate-risk use case without logging (Moderate)

    Condition: Base tier is Moderate AND no logging of inputs and outputs is in place.

    Rationale: Moderate tier requires logging. Tier is held at Moderate and deployment is blocked until logging is added.

    Remediation:

    A moderate-risk use case is missing required logging.

    • Add basic input, output, user, timestamp, and model-version logging.
    • Define who reviews exceptions.
    • Re-run the assessment after logging is enabled.
    • NYDFS Circular Letter No. 7 (2024)

    Kind: Modifier (evaluated after base tier is set)
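R2 and R9 run only after the base rules have produced a tier, which is why they are marked as modifiers. A sketch of the two-pass flow under that reading (identifiers are illustrative; the tier names come from the framework):

```python
from enum import IntEnum

class Tier(IntEnum):
    LOW = 0
    MODERATE = 1
    HIGH = 2
    PROHIBITED = 3

def base_tier(fired_tiers):
    # Pass 1: most severe tier among fired base rules (Low if none fire).
    return max(fired_tiers, default=Tier.LOW)

def apply_modifiers(tier, logging_in_place):
    # Pass 2: R2 escalates an unlogged High use case to Prohibited.
    # R9 fires on an unlogged Moderate use case but holds the tier at
    # Moderate; it blocks deployment rather than escalating.
    if tier == Tier.HIGH and not logging_in_place:
        return Tier.PROHIBITED
    return tier

# High-risk use case without logging -> Prohibited (R2).
assert apply_modifiers(base_tier([Tier.HIGH]), logging_in_place=False) == Tier.PROHIBITED
# The same use case with logging in place stays High.
assert apply_modifiers(base_tier([Tier.HIGH]), logging_in_place=True) == Tier.HIGH
```

The two-pass split matters: R2's condition is defined against the base tier, so it cannot fire before every base rule has been evaluated.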

Jurisdiction overlays

Jurisdiction prompts do not change the deterministic tier yet. They identify review questions that should be included in the assessment packet.

NAIC model bulletin states

Many state insurance departments now use the NAIC AI model bulletin as their governance baseline.

  • Keep an AI inventory, risk classification, controls, and audit evidence.
  • Be ready to explain vendor oversight, model monitoring, and consumer-impact controls.
  • Document who owns the AI system and who can approve launch or material changes.

Source: NAIC Model Bulletin on the Use of AI Systems by Insurers

New York

NYDFS guidance focuses on the use of external consumer data and AI systems in underwriting and pricing, with particular attention to unfair discrimination and proxy risk.

  • Confirm whether external consumer data or proxy variables are used.
  • Document why the data is not a protected-class proxy and does not create unfair discrimination.
  • Keep governance, testing, and documentation strong enough for regulatory inquiry.

Source: NYDFS Insurance Circular Letter No. 7 (2024)

Colorado

Colorado has one of the more prescriptive insurance AI regimes for unfair discrimination controls.

  • Identify external consumer data, algorithms, and predictive models in scope.
  • Prepare governance, testing, documentation, and accountability evidence.
  • Track whether the use case affects a line of insurance subject to current Colorado requirements.

Source: Colorado SB21-169 and Division of Insurance rules

California

California has warned insurers that AI, big data, and algorithmic methods must still comply with anti-discrimination rules.

  • Review marketing, rating, underwriting, claims, and fraud practices for unfair discrimination.
  • Document how model inputs and outputs are tested for disparate or unfair outcomes.
  • Confirm consumer-facing communications are accurate and reviewable.

Source: California Department of Insurance Bulletin 2022-5

European Union

The EU AI Act treats certain life and health insurance risk assessment and pricing systems as high-risk.

  • Confirm whether natural persons are affected in life or health insurance.
  • Prepare high-risk AI evidence: risk management, data governance, logging, human oversight, and documentation.
  • Do not assume a US-only tier maps cleanly to EU compliance status.

Source: EU AI Act Annex III

Not sure

Jurisdiction is unresolved, so the assessment should preserve the state and international questions for review.

  • Identify where applicants, policyholders, claimants, and users are located.
  • Confirm which state insurance departments or international regimes may apply.
  • Do not treat a generic US review as complete until jurisdiction is confirmed.

Source: Jurisdiction intake gap

Controls matrix

Control | Low | Moderate | High | Prohibited
Logging of inputs and outputs | Recommended | Required | Required | Not applicable
Human-in-the-loop review | Not required | Required | Required | Not applicable
Model version control | Recommended | Required | Required | Not applicable
Escalation path (named role + SLA) | Recommended | Required | Required | Not applicable
Data provenance documented | Required | Required | Required | Not applicable
Bias and drift monitoring | Not required | Required - Quarterly | Required - Monthly | Not applicable
Regulatory counsel sign-off | Not required | Recommended | Required | Not applicable
Client disclosure of AI use | Not required | Not required | Required | Not applicable
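The matrix is effectively a lookup from control and tier to a requirement level. A minimal encoding, with data taken from the matrix but identifiers that are assumptions:

```python
# Requirement per control for the (Low, Moderate, High) tiers, as in
# the matrix above. Prohibited is always "Not applicable".
CONTROLS = {
    "Logging of inputs and outputs":      ("Recommended", "Required", "Required"),
    "Human-in-the-loop review":           ("Not required", "Required", "Required"),
    "Model version control":              ("Recommended", "Required", "Required"),
    "Escalation path (named role + SLA)": ("Recommended", "Required", "Required"),
    "Data provenance documented":         ("Required", "Required", "Required"),
    "Bias and drift monitoring":          ("Not required", "Required - Quarterly", "Required - Monthly"),
    "Regulatory counsel sign-off":        ("Not required", "Recommended", "Required"),
    "Client disclosure of AI use":        ("Not required", "Not required", "Required"),
}

TIERS = ("Low", "Moderate", "High")

def requirement(control: str, tier: str) -> str:
    """Look up the requirement level for a control at a given tier."""
    if tier == "Prohibited":
        return "Not applicable"
    return CONTROLS[control][TIERS.index(tier)]

assert requirement("Data provenance documented", "Low") == "Required"
assert requirement("Bias and drift monitoring", "High") == "Required - Monthly"
```

Note that data provenance is the one control required at every deployable tier, which is consistent with R8a refusing to treat unknown provenance as Low risk.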

Review expectations

Tier | Who | When | What
Low | Line manager | Before launch, then annually | Confirm the description is accurate and baseline controls are in place.
Moderate | Compliance lead and line manager | Before launch; monthly output sampling | Verify logging, sample outputs for accuracy, and confirm escalations are handled end-to-end.
High | Compliance lead, legal, and an executive sponsor | Before launch; weekly output review for the first 90 days, then monthly | Review every output pre-send during the first 90 days, monitor bias and drift metrics, and confirm regulatory posture and client disclosure.
Prohibited | Do not deploy | Not applicable | Address the blocking rule(s) and re-run the assessment.