New York just told the biggest AI labs: if something goes seriously wrong, you don’t get to bury it. Under the RAISE Act, large “frontier AI” developers must publish a safety approach and report “critical harm” incidents to the state within 72 hours of determining one occurred—backed by civil penalties capped at $1M for a first violation and $3M for each subsequent one, well below the steeper penalty structure in the bill’s earlier (June) version cited in later reporting.
The state’s posture is widening beyond frontier-lab transparency: separate Hochul-signed measures target AI use in advertising and post-mortem name/image/likeness protections in the film industry, reinforcing a broader New York strategy just as the White House ramps up pressure to preempt state AI rules before Congress acts.