Minnesota Legislature

Office of the Revisor of Statutes

HF 4532

Introduction - 94th Legislature (2025 - 2026)

Posted on 03/23/2026 03:22 p.m.

KEY: stricken = removed, old language.
underscored = added, new language.

A bill for an act
relating to commerce; establishing artificial intelligence safety and disclosure
requirements; providing civil remedies; proposing coding for new law in Minnesota
Statutes, chapter 325M.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF MINNESOTA:

Section 1.

[325M.39] TITLE.

Sections 325M.40 to 325M.42 may be cited as the "Responsible Artificial Intelligence Safety and Education Act" or the "RAISE Act."

Sec. 2.

[325M.40] DEFINITIONS.

(a) For the purposes of sections 325M.40 to 325M.42, the following terms have the meanings given.

(b) "Artificial intelligence" means a machine-based system that: (1) is able to, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments; and (2) uses machine- and human-based inputs to perceive real and virtual environments, abstract the perceptions into models through analysis in an automated manner, and use model inference to formulate options for information or action.

(c) "Artificial intelligence model" means an information system or component of an information system that implements artificial intelligence technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.

(d) "Critical harm" means the death, serious physical injury, or mental injury of 25 or more people, or at least $1,000,000 of damages to rights in money or property, caused or materially enabled by a developer's use, storage, or release of an artificial intelligence model that is the result of:

(1) the creation or use of a chemical, biological, radiological, or nuclear weapon; or

(2) conduct that with no meaningful human intervention would, if committed by a human, constitute a crime that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of a crime that requires intent, recklessness, or gross negligence.

(e) "Developer" means a person that has trained at least one artificial intelligence model.

(f) "Safety and security protocol" means a documented technical and organizational protocol that:

(1) describes reasonable protections and procedures that, if successfully implemented, appropriately reduce the risk of critical harm;

(2) describes reasonable administrative, technical, and physical cybersecurity protections for artificial intelligence models within the developer's control that, if successfully implemented, appropriately reduce the risk of unauthorized access to or misuse of the artificial intelligence models leading to critical harm, including by sophisticated actors;

(3) describes in detail the testing procedure to evaluate whether the artificial intelligence model (i) poses an unreasonable risk of critical harm, (ii) could evade the artificial intelligence model's developer's or user's control, or (iii) could be misused, modified, executed with increased computational resources, combined with other software, or used to create another artificial intelligence model in a manner that increases the risk of critical harm;

(4) enables the developer or third party to comply with the requirements of sections 325M.40 to 325M.42; and

(5) designates senior personnel to be responsible for ensuring compliance.

(g) "Safety incident" means a known incident of critical harm or one of the following that provides demonstrable evidence of an increased risk of critical harm:

(1) an artificial intelligence model autonomously engages in behavior other than at the request of a user;

(2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of an artificial intelligence model's model weights; or

(3) unauthorized use of an artificial intelligence model.

Sec. 3.

[325M.41] TRANSPARENCY REQUIREMENTS.

Subdivision 1. Developer requirements. Before deploying an artificial intelligence model, a developer must:

(1) implement a written safety and security protocol;

(2) retain an unredacted copy of the safety and security protocol, including records and dates of updates or revisions, for the entire period of time an artificial intelligence model is deployed, plus five years;

(3) conspicuously publish a copy of the safety and security protocol with appropriate redactions, and transmit a copy of the redacted safety and security protocol to the attorney general;

(4) grant the attorney general access to the safety and security protocol with redactions only to the extent required by federal law, if the attorney general requests access;

(5) record and retain information on the specific tests and test results used in any assessment of the artificial intelligence model required under this section or by the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure, for the entire period of time an artificial intelligence model is deployed, plus five years; and

(6) implement appropriate safeguards to prevent unreasonable risk of critical harm.

Subd. 2. Prohibition. A developer must not deploy an artificial intelligence model if doing so creates an unreasonable risk of critical harm.

Subd. 3. Annual review. (a) A developer must (1) conduct an annual review of the safety and security protocol required under this section to account for changes to the capabilities of the artificial intelligence model and industry best practices; and (2) modify the safety and security protocol.

(b) If a material modification is made to the safety and security protocol, the developer must publish the safety and security protocol in the same manner required under subdivision 1, clause (3).

Subd. 4. Safety incident disclosure. A developer must disclose each safety incident affecting the artificial intelligence model to the attorney general within 72 hours of the date the developer learns of the safety incident or within 72 hours of the date the developer learns sufficient facts to establish a reasonable belief that a safety incident has occurred. The disclosure must include:

(1) the date of the safety incident;

(2) the reasons the safety incident qualifies as a safety incident as defined in this section; and

(3) a short statement describing in plain language the safety incident.

Subd. 5. False or materially misleading statements. A developer must not knowingly make false or materially misleading statements or omissions in or regarding documents produced under this section.

Sec. 4.

[325M.42] ENFORCEMENT; PRIVATE RIGHT OF ACTION.

Subdivision 1. Attorney general. The attorney general may bring a civil action for a violation of section 325M.41 and recover, based on severity of the violation:

(1) a civil penalty in an amount not exceeding $10,000,000 for a first violation and in an amount not exceeding $30,000,000 for any subsequent violation; and

(2) injunctive or declaratory relief.

Subd. 2. Private right of action. A person injured by a violation of this section may bring a civil action to recover damages, costs, and disbursements, including reasonable attorney fees, and receive other equitable relief as determined by the court.