New York Passes Landmark AI Safety Transparency Bill

New York lawmakers approved the RAISE Act, which would impose safety and transparency standards on leading AI developers such as OpenAI, Google, and Anthropic. If signed into law, it would require detailed risk and incident reports for models trained with over $100 million in compute and empower the state Attorney General to levy penalties of up to $30 million for non-compliance.

Published June 13, 2025 at 07:08 PM EDT in Artificial Intelligence (AI)

On Thursday, the New York State Legislature passed the RAISE Act, which aims to prevent advanced AI models from contributing to disasters involving mass injury or death, or more than $1 billion in damages. The bill targets frontier systems from companies like OpenAI, Google, and Anthropic.

If enacted, the RAISE Act would make New York the first state with legally binding transparency and safety standards for models built with more than $100 million in compute. Labs would have to publish security assessments and report incidents, ranging from erratic model behavior to unauthorized access, within strict timelines.

Overview of the RAISE Act

The RAISE Act outlines clear thresholds and reporting obligations designed to balance innovation with safety. Key elements include compute-based applicability, detailed risk disclosures, and civil penalties of up to $30 million for non-compliance.

  • Applies to AI models trained with over $100 million in computing resources and available to New York residents.
  • Requires publication of safety and security reports before model deployment.
  • Mandates incident reporting for concerning behaviors or security breaches.
  • Empowers the New York Attorney General to enforce penalties up to $30 million.

Industry Reaction

Supporters, including Nobel laureate Geoffrey Hinton and AI pioneer Yoshua Bengio, view the RAISE Act as a critical step in AI governance. Critics from Silicon Valley warn it could drive top labs to withdraw services from New York, although sponsors argue the law is narrowly tailored to avoid chilling startups or academia.

Challenges and Considerations

Implementing the RAISE Act will require AI labs to revamp their development lifecycles, integrate continuous monitoring, and establish governance teams. Smaller entities might face resource constraints, but sponsors designed carve-outs to protect early-stage research.

Looking Ahead

Governor Kathy Hochul now has the option to sign, amend, or veto the RAISE Act. As the deadline approaches, AI developers and New York businesses must prepare for new compliance demands. QuarkyByte’s analysis suggests that proactive safety reporting will become a competitive advantage rather than a regulatory burden.

Discover how QuarkyByte’s AI policy analytics can help global enterprises and research labs align with the RAISE Act’s transparency and reporting mandates. Use our risk assessment models to map potential compliance gaps and quantify regulatory exposure before Governor Hochul’s decision.