New York’s RAISE Act turns frontier AI safety into a 72-hour countdown

Rule Changes
By Newzino Staff

The 72-hour reporting clock survives—but the penalty teeth are smaller, as Washington leans harder on preemption

December 21st, 2025: Reuters write-up spotlights how RAISE penalties were scaled down from earlier versions

Overview

New York just told the biggest AI labs: if something goes seriously wrong, you don’t get to bury it. Under the RAISE Act, large “frontier AI” developers must publish a safety approach and report “critical harm” incidents to the state within 72 hours of determining one occurred. Violations carry civil penalties capped at $1M for a first offense and $3M for subsequent offenses, well below the penalty structure in the bill’s earlier June version, as subsequent reporting has noted.

Key Indicators

72 hours
Incident reporting window
Companies must notify the state quickly once they determine a qualifying incident occurred.
$1M / $3M
Civil penalty caps (first vs. subsequent violations)
Attorney General enforcement includes penalties for failing to report or making false statements.
1 new office
New DFS oversight unit
A new office inside New York’s financial regulator becomes the AI transparency referee.
Jan. 1, 2027
Reported compliance start date
Coverage indicates key obligations phase in starting January 1, 2027.
$500M+
Reported “largest company” threshold
Major coverage describes applicability tied to very large-company scale.

Timeline

  1. Reuters write-up spotlights how RAISE penalties were scaled down from earlier versions

    Media

    A Reuters story syndicated by Engadget highlighted that the final RAISE Act’s penalty caps ($1M/$3M) are substantially lower than the fines described in earlier versions of the bill, while also noting Hochul’s earlier December signings aimed at AI transparency in advertising and post-mortem likeness consent.

  2. Wall Street Journal frames it as defiance

    Media

    WSJ spotlights New York’s move despite the federal push for preemption.

  3. Hochul signs the RAISE Act

    Legal

    New York enacts frontier-AI safety frameworks and 72-hour incident reporting.

  4. White House tries to freeze the states

    Rule Changes

    Trump signs an order aimed at blocking restrictive state AI laws.

  5. Parents push Hochul to sign

    Public Pressure

    A parent-led coalition urges Hochul to enact RAISE without weakening edits.

  6. RAISE reaches Hochul’s desk

    Legal

    Legislature delivers the enrolled bill to the governor.

  7. California sets the template

    Legal

    Newsom signs SB 53, a frontier-AI transparency and incident-reporting law.

  8. Albany passes RAISE

    Legal

    New York Senate and Assembly pass RAISE after amendments.

  9. RAISE lands in the Senate

    Legal

    Senator Andrew Gounardes introduces the Senate companion bill.

  10. RAISE lands in the Assembly

    Legal

    Assemblymember Alex Bores introduces the RAISE Act’s Assembly bill.

  11. California veto sparks the “what’s feasible” debate

    Legal

    Newsom vetoes SB 1047, rejecting the toughest frontier-AI safety plan.

Scenarios

1

“The New Standard”: Big AI labs quietly comply nationwide

Discussed by: The Wall Street Journal; Axios; California and New York officials citing “unified benchmarks”

DFS stands up the new oversight office, companies publish frameworks, and incident reporting becomes routine—because the biggest developers decide it’s cheaper to standardize than to fight. Other tech-heavy states copy the model, and “frontier-AI transparency” becomes the default expectation for top-tier labs even where it’s not legally required.

2

“Preemption Showdown”: DOJ sues, states counter-sue, courts decide who’s boss

Discussed by: The Washington Post; The Wall Street Journal; state-level officials openly challenging the executive order

The federal government escalates from threats to litigation, arguing state AI laws obstruct national policy. New York and allies respond with federalism arguments and injunction requests. The practical result is limbo: companies prepare compliance plans while waiting to see whether courts uphold state authority or bless federal preemption via executive power.

3

“RAISE, But Softer”: Reporting happens, but the law becomes mostly a transparency paperwork regime

Discussed by: The American Prospect; industry arguments about feasibility and trade secret exposure

Industry pressure shifts from “kill the bill” to “minimize the impact.” Regulators interpret obligations narrowly, redactions expand, and incident reporting becomes highly standardized and low-detail. New York still gets a reporting pipeline and leverage, but the public learns less than advocates hoped—and the biggest wins move to quieter enforcement settlements.

4

“Congress Finally Moves”: A federal framework overrides the state patchwork

Discussed by: Major tech-industry lobbying coalitions; national political coverage framing patchwork risk

After enough states adopt divergent AI rules—and enough companies complain—Congress passes a federal framework that partially preempts states while borrowing the core idea: mandatory safety frameworks and incident reporting for frontier developers. New York’s law becomes the prototype that helped write the national statute, even if parts are superseded.

Historical Context

NYDFS Cybersecurity Regulation (23 NYCRR 500)

2017–present

What Happened

New York’s financial regulator imposed detailed cybersecurity obligations and incident reporting requirements on covered entities. Even outside New York, many firms treated it as a de facto baseline because compliance programs don’t scale well state-by-state.

Outcome

Short Term

Companies built formal reporting and governance processes to avoid NYDFS penalties.

Long Term

New York proved a state regulator can set national compliance norms in practice.

Why It's Relevant Today

RAISE repeats the same playbook: make reporting mandatory, then make it enforceable.

GDPR and the “Brussels effect” in privacy

2016–2018 (adoption to enforcement)

What Happened

Europe passed a privacy regime with strong disclosure and breach notification rules. Global companies often chose worldwide compliance rather than running separate systems by geography.

Outcome

Short Term

Companies rewired privacy operations, contracts, and incident response for GDPR timelines.

Long Term

Privacy expectations shifted globally, even in places without identical laws.

Why It's Relevant Today

New York and California are trying to create an American version of that compliance gravity.

California Consumer Privacy Act (CCPA)

2018–2020 (passage to early enforcement)

What Happened

California passed a sweeping consumer privacy law that forced national brands to update disclosures and data practices. Many firms rolled out CCPA-style controls nationally to simplify operations.

Outcome

Short Term

National compliance teams treated California as the design constraint.

Long Term

State policy became the launchpad for broader U.S. privacy regulation.

Why It's Relevant Today

RAISE aims for the same dynamic—one big state sets the rulebook everyone else follows.
