Minnesota Legislature

Office of the Revisor of Statutes

SF 4636

Introduction - 94th Legislature (2025 - 2026)

Posted on 03/24/2026 10:21 a.m.

KEY: stricken = removed, old language.
underscored = added, new language.

A bill for an act
relating to commerce; establishing a license for artificial intelligence independent
verification organizations; establishing an advisory council; authorizing rulemaking;
requiring reports; proposing coding for new law in Minnesota Statutes, chapter
325M.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF MINNESOTA:

Section 1.

new text begin [325M.50] DEFINITIONS.
new text end

new text begin Subdivision 1. Scope. For purposes of sections 325M.51 to 325M.54, the
following terms have the meanings given.
new text end

new text begin Subd. 2. Artificial intelligence application. "Artificial intelligence
application" means a software program or system that uses artificial intelligence to perform
tasks that typically require human intelligence.
new text end

new text begin Subd. 3. Artificial intelligence model. "Artificial intelligence model" means
an engineered or machine-based system that can, for explicit or implicit objectives, infer
from an input received how to generate output that can influence physical or virtual
environments.
new text end

new text begin Subd. 4. Commissioner. "Commissioner" means the commissioner of commerce.
new text end

new text begin Subd. 5. Deployer. "Deployer" means a person that implements, integrates, or
makes operational an artificial intelligence model or artificial intelligence application in
this state. Deployer includes a person that makes an artificial intelligence model or artificial
intelligence application available for use by others in this state, whether directly or as part
of a product or service.
new text end

new text begin Subd. 6. Developer. "Developer" means a person that develops an artificial
intelligence model or artificial intelligence application that is deployed in this state.
new text end

new text begin Subd. 7. Independent verification organization or IVO. "Independent
verification organization" or "IVO" means an entity licensed by the commissioner pursuant
to section 325M.51 to assess an artificial intelligence model's or artificial intelligence
application's adherence to standards reflecting best practices for the prevention of personal
injury and property damage.
new text end

new text begin Subd. 8. Security vendor. "Security vendor" means a third-party entity engaged
by an IVO or developer to evaluate the safety or security of an artificial intelligence model
or artificial intelligence application, including by using processes such as risk detection and
risk mitigation.
new text end

Sec. 2.

new text begin [325M.51] INDEPENDENT VERIFICATION ORGANIZATION LICENSURE
REQUIREMENT.
new text end

new text begin Subdivision 1. License required. An application for an independent verification
organization license must be made by filing with the commissioner the information, materials,
and forms specified in rules adopted by the commissioner.
new text end

new text begin Subd. 2. IVO proposed plan. An IVO must file a proposed plan with the IVO's
application. The proposed plan must include:
new text end

new text begin (1) the risk or risks that artificial intelligence models or artificial intelligence applications
must mitigate. For each risk identified, the plan must include:
new text end

new text begin (i) a proposed definition of acceptable levels of risk;
new text end

new text begin (ii) metrics that are measurable and can be used to determine whether the acceptable
level of risk defined by the IVO produces beneficial outcomes;
new text end

new text begin (iii) target levels for the metrics, including data sources the target levels are based on
and methods for measurement; and
new text end

new text begin (iv) a description of the evaluation and reporting protocol to determine whether verified
artificial intelligence models or artificial intelligence applications meet the outcome metrics
on an ongoing basis;
new text end

new text begin (2) proposed technical, operational, governance, and other mitigation requirements for
developers or deployers, including procedures for predevelopment and postdevelopment,
to ensure acceptable levels of risk, including:
new text end

new text begin (i) ongoing monitoring of risks; and
new text end

new text begin (ii) ongoing assessment of mitigation efficacy;
new text end

new text begin (3) methodologies and sources used to evaluate the efficacy of mitigation requirements;
new text end

new text begin (4) benchmarks, technologies, and audit methodologies proposed to assess developer
and deployer adherence to mitigation requirements;
new text end

new text begin (5) an approach to assessing continued good standing of a verified artificial intelligence
model or artificial intelligence application, including reviewing and evaluating the developer's
or deployer's maintenance of artificial intelligence governance plans and policies, processes
for risk monitoring and mitigation, whistleblower protections, and training for employees
and third parties;
new text end

new text begin (6) disclosure requirements for developers or deployers related to detected risks, incident
reports, or material changes to risk profiles, including both risks detected before verification
and risks resulting from fine-tuning or modifying an artificial intelligence model or artificial
intelligence application after verification;
new text end

new text begin (7) procedures for prescribing and verifying implementation of corrective actions to
remedy a developer's or deployer's identified failure to:
new text end

new text begin (i) achieve an acceptable level of risk with respect to an artificial intelligence application
or artificial intelligence model;
new text end

new text begin (ii) comply with any other mitigation requirements promulgated by the applicant; and
new text end

new text begin (iii) comply with the developer's or deployer's artificial intelligence governance plan
and policy;
new text end

new text begin (8) standards and procedures for revoking verification for noncompliance with the
applicant's mitigation requirements, failure to achieve acceptable levels of risk, or
noncompliance with the developer's or deployer's artificial intelligence governance plans
and policies;
new text end

new text begin (9) whether the applicant proposes risk-specific verification and how plans are tailored
to the specific risk;
new text end

new text begin (10) coordination with federal and state authorities;
new text end

new text begin (11) personnel qualifications;
new text end

new text begin (12) governance policies, sources of funding, and policies ensuring independence; and
new text end

new text begin (13) any other information required by the commissioner.
new text end

new text begin Subd. 3. License determination. (a) The commissioner may license an applicant
if the commissioner finds:
new text end

new text begin (1) the applicant demonstrated independence from the artificial intelligence community;
and
new text end

new text begin (2) every element of the applicant's proposed plan is adequate to ensure that artificial
intelligence models or artificial intelligence applications verified pursuant to the plan mitigate
one or more risks to an acceptable level.
new text end

new text begin (b) If verification is proposed for a specific risk, the commissioner must
evaluate the plan's adequacy accordingly.
new text end

new text begin (c) If the commissioner finds that an applicant's plan does not adequately mitigate all of
the proposed risks, the applicant must be licensed to verify only those risks for which the
plan is deemed adequate.
new text end

new text begin (d) The license must specify:
new text end

new text begin (1) the risks the IVO is authorized to verify; and
new text end

new text begin (2) any markets the license applies to.
new text end

new text begin Subd. 4. License revocation. The commissioner must revoke an IVO license if
the commissioner determines:
new text end

new text begin (1) the IVO's plan is materially misleading or inaccurate;
new text end

new text begin (2) the IVO fails to adhere to the IVO's plan in a way that materially impairs the IVO's
responsibilities, including failure to adhere to the plan's procedures for ongoing monitoring
of verified artificial intelligence models or artificial intelligence applications and
implementation of corrective action;
new text end

new text begin (3) a material change compromises independence from the artificial intelligence industry;
new text end

new text begin (4) technological evolution renders methods obsolete for ensuring acceptable levels of
the risk the commissioner has designated the IVO to verify; or
new text end

new text begin (5) a verified artificial intelligence model or artificial intelligence application
causes material harm of a type that the IVO's acceptable level of risk is defined to prevent.
new text end

new text begin Subd. 5. Cure opportunity. The commissioner may allow an IVO to cure the
basis for revocation before terminating the license.
new text end

new text begin Subd. 6. Fees. The commissioner must establish reasonable application and
renewal fees sufficient to offset administrative costs.
new text end

new text begin Subd. 7. Verification not required. Nothing in this section requires that an
artificial intelligence model or artificial intelligence application be verified by an IVO.
new text end

new text begin Subd. 8. Rulemaking. The commissioner may adopt rules necessary to implement
sections 325M.51 to 325M.54.
new text end

Sec. 3.

new text begin [325M.52] INDEPENDENT VERIFICATION ORGANIZATION
REQUIREMENTS.
new text end

new text begin Subdivision 1. Implementation. A licensed IVO must implement the IVO's
approved plan.
new text end

new text begin Subd. 2. Revocation of verification. An IVO must revoke verification if a
developer or deployer:
new text end

new text begin (1) fails to meet mitigation requirements;
new text end

new text begin (2) fails to cooperate with monitoring;
new text end

new text begin (3) violates governance policies; or
new text end

new text begin (4) fails to implement corrective actions.
new text end

new text begin Subd. 3. Plan modification. (a) An IVO may update or modify:
new text end

new text begin (1) technical and operational requirements;
new text end

new text begin (2) evaluation benchmarks;
new text end

new text begin (3) audit methodologies;
new text end

new text begin (4) governance plans;
new text end

new text begin (5) verification activities to enhance efficacy; and
new text end

new text begin (6) any other element of the IVO's plan in order to take advantage of improved
technology.
new text end

new text begin (b) An IVO may address previously discovered issues with the IVO's plan.
new text end

new text begin (c) An IVO must report material changes in writing to the commissioner. The
notice must describe the changes, the rationale for the changes, and an explanation of how
the changes better enable the IVO to ensure the requisite level of mitigation of relevant
risks.
new text end

new text begin (d) Changes must take effect upon notification.
new text end

new text begin (e) The commissioner may, within six months after receiving notice of changes, request
additional information from the IVO regarding the changes or may issue a written notice
rejecting the changes in whole or in part. If the commissioner rejects the changes, the IVO
has 30 days to modify the IVO's plan to comply with the commissioner's determination and
to assess whether artificial intelligence models or applications assessed under the previous
plan must be reassessed.
new text end

new text begin Subd. 4. Annual reporting. (a) An IVO must submit an annual report to the
commissioner and chairs and ranking minority members of the legislative committees with
jurisdiction over artificial intelligence. The report must include:
new text end

new text begin (1) aggregated information on the capabilities of the artificial intelligence
models and artificial intelligence applications evaluated by the IVO and the observed and
potential societal risks and benefits associated with those capabilities;
new text end

new text begin (2) the adequacy of evaluation resources, technical capabilities, and mitigation measures
to address observed and potential risks;
new text end

new text begin (3) aggregated results of verification assessments;
new text end

new text begin (4) aggregated and anonymized information on compliance with prescribed
remediation;
new text end

new text begin (5) anonymized descriptions of any additional significant risk the IVO observed while
conducting assessments, even if the risk is not one the IVO is licensed to verify;
new text end

new text begin (6) a list of verified artificial intelligence models and artificial intelligence applications;
new text end

new text begin (7) a description of evaluation methods; and
new text end

new text begin (8) governance or funding changes that affect independence.
new text end

new text begin (b) IVOs may redact trade secrets, sensitive business information, personally identifiable
information, and other security-sensitive content.
new text end

new text begin (c) Documentation used in reports must be retained for ten years. An IVO must also
retain all documentation relating to the IVO's assessment and verification of artificial
intelligence models or artificial intelligence applications, including ongoing monitoring and
any subsequent corrective action, for ten years after the relevant activity.
new text end

new text begin (d) The commissioner must publish on the Department of Commerce website redacted
versions of reports issued by IVOs.
new text end

Sec. 4.

new text begin [325M.53] ARTIFICIAL INTELLIGENCE ADVISORY COUNCIL.
new text end

new text begin Subdivision 1. Establishment. (a) The Artificial Intelligence Advisory Council
is established within the Department of Commerce. The commissioner must determine the
appropriate size of the council and appoint all members.
new text end

new text begin (b) Membership must include at least one representative of civil society, drawn
from organizations including but not limited to nongovernmental organizations, educational
and research institutions, public policy institutes, and consumer and business advocacy
organizations.
new text end

new text begin Subd. 2. Duties. (a) The commissioner must delegate powers, including licensing
and auditing, to the council.
new text end

new text begin (b) A member must:
new text end

new text begin (1) remain free of undue influence and refrain from any action that could compromise
the member's ability to carry out the member's responsibilities or otherwise cast doubt on
the member's ability to independently assess artificial intelligence models or artificial
intelligence applications;
new text end

new text begin (2) refrain from any action or occupation, gainful or not, that is incompatible with the
member's duties, including but not limited to employment by a developer or deployer of
artificial intelligence;
new text end

new text begin (3) refrain from owning or acquiring any equity or other interest, directly or indirectly,
in a company whose business consists in significant part of developing or deploying artificial
intelligence;
new text end

new text begin (4) observe a one-year postemployment restriction from artificial intelligence firms or
IVOs; and
new text end

new text begin (5) be qualified to assess IVO plans.
new text end

new text begin Subd. 3. Administration. (a) A member must not serve more than two consecutive
terms.
new text end

new text begin (b) A member must receive reimbursement for actual and necessary expenses incurred
in the discharge of duties. A member may also receive a salary for carrying out the member's
duties under this section.
new text end

new text begin (c) A member may be removed for inefficiency, neglect, or malfeasance.
new text end

new text begin (d) A majority of members constitutes a quorum, and concurrence of a majority
of a quorum is sufficient for a determination of the council.
new text end

new text begin (e) The council must keep a record of the council's proceedings, including all
considerations relating to the issuance, refusal, renewal, and revocation of an IVO license.
new text end

Sec. 5.

new text begin [325M.54] LIMITED LIABILITY; REBUTTABLE PRESUMPTION.
new text end

new text begin In a civil action asserting claims for personal injury or property damage caused by an
artificial intelligence model or artificial intelligence application, there is a rebuttable
presumption against liability if:
new text end

new text begin (1) the artificial intelligence model or artificial intelligence application in question was
verified by a licensed IVO at the time of the plaintiff's alleged injury;
new text end

new text begin (2) the plaintiff's alleged injury arose from a risk that the IVO was licensed to verify
and for which the IVO did verify the artificial intelligence model or artificial intelligence
application; and
new text end

new text begin (3) the artificial intelligence model or artificial intelligence application is within the
specified market segment, if any, for which the IVO was licensed to conduct verification.
new text end