
New York Passes a Bill to Prevent AI-Fueled Disasters

New York state legislators approved a bill that seeks to prevent leading AI models from OpenAI, Google, and Anthropic from contributing to catastrophic events, defined as incidents causing the death or injury of more than 100 people or more than $1 billion in damages.
The passage of the RAISE Act marks a victory for the AI safety movement, whose influence has waned in recent years as Silicon Valley and the Trump administration have prioritized rapid development and innovation.
Safety proponents, including Nobel laureate Geoffrey Hinton and AI research pioneer Yoshua Bengio, have endorsed the RAISE Act. If enacted, the bill would establish the first legally mandated transparency standards for frontier AI labs in the US.
The RAISE Act shares some provisions and objectives with California’s contentious AI safety bill, SB 1047, which was ultimately vetoed.
If it becomes law, New York’s AI safety measure would require the world’s largest AI labs to publish detailed safety and security reports on their frontier AI models. The legislation would also require labs to report safety incidents, such as concerning AI model behavior or the theft of an AI model by unauthorized parties. If tech companies fail to meet these standards, the RAISE Act empowers New York’s attorney general to impose civil penalties of up to $30 million.
The RAISE Act aims to specifically regulate the biggest companies globally—whether headquartered in California (like OpenAI and Google) or China (like DeepSeek and Alibaba).
The transparency mandates apply to companies whose AI models were trained using more than $100 million in computing resources—likely more than any current AI model—and are made available to New York residents.
While echoing certain aspects of SB 1047, the RAISE Act was designed to address criticisms leveled at earlier AI safety bills, according to Nathan Calvin, vice president of State Affairs and general counsel at Encode, who worked on this bill as well as SB 1047. Notably, the RAISE Act does not require AI model developers to include a “kill switch” in their models, nor does it hold companies that post-train frontier AI models accountable for critical harms.
Even so, Silicon Valley has pushed back strongly against New York’s AI safety bill, New York state Assembly member and RAISE Act co-sponsor Alex Bores told TechCrunch. Bores called the industry’s resistance unsurprising but argued that the RAISE Act would not constrain tech companies’ ability to innovate in any way.
Anthropic, the safety-focused AI lab that called for federal transparency standards for AI companies earlier this month, has not taken an official position on the bill, co-founder Jack Clark said in a Friday post on X. Nonetheless, Clark voiced concerns about the breadth of the RAISE Act, suggesting it could pose a risk to “smaller companies.”