New York Passes Landmark AI Safety Transparency Bill
New York lawmakers approved the RAISE Act to impose safety and transparency standards on leading AI developers such as OpenAI, Google, and Anthropic. If signed into law, it would require detailed risk and incident reports from developers of models trained with over $100 million in compute and empower the Attorney General to levy penalties of up to $30 million for non-compliance.
On Thursday, the New York State Legislature passed the RAISE Act, which aims to prevent frontier AI models from causing disaster scenarios involving mass injuries or more than $1 billion in damages. The bill targets advanced systems from companies like OpenAI, Google, and Anthropic.
If enacted, New York would set the nation’s first legally binding transparency and safety standards for models built with more than $100 million in compute. Labs must publish security assessments and report incidents—ranging from erratic AI behavior to unauthorized access—within strict timelines.
Overview of the RAISE Act
The RAISE Act outlines clear thresholds and reporting obligations designed to balance innovation with safety. Key elements include compute-based applicability, detailed risk disclosures, and civil penalties of up to $30 million for non-compliance.
- Applies to AI models trained with over $100 million in computing resources and available to New York residents.
- Requires publication of safety and security reports before model deployment.
- Mandates incident reporting for concerning behaviors or security breaches.
- Empowers the New York Attorney General to enforce penalties up to $30 million.
Industry Reaction
Supporters, including Nobel laureate Geoffrey Hinton and AI pioneer Yoshua Bengio, view the RAISE Act as a critical step in AI governance. Critics from Silicon Valley warn it could drive top labs to withdraw services from New York, although sponsors argue the law is narrowly tailored to avoid chilling startups or academia.
Challenges and Considerations
Implementing the RAISE Act will require AI labs to revamp their development lifecycles, integrate continuous monitoring, and establish governance teams. Smaller entities might face resource constraints, but sponsors designed carve-outs to protect early-stage research.
Looking Ahead
Governor Kathy Hochul can now sign the RAISE Act into law, send it back with requested amendments, or veto it. As her decision approaches, AI developers and New York businesses must prepare for new compliance demands. QuarkyByte’s analysis suggests that proactive safety reporting will become a competitive advantage rather than a regulatory burden.
Discover how QuarkyByte’s AI policy analytics can help global enterprises and research labs align with the RAISE Act’s transparency and reporting mandates. Use our risk assessment models to map potential compliance gaps and quantify regulatory exposure before Governor Hochul’s decision.