AI developers are concerned about the California bill, which mandates written safety plans for powerful AI models.
The California Senate is edging closer to enacting a controversial bill regulating AI model development and training, and many in the sector aren't happy with the proposed law.
California Senate Bill 1047 obligates artificial intelligence (AI) companies to deploy robust safety frameworks in models that cost over $100 million to develop. The tech industry, with many of its businesses domiciled in Silicon Valley, has opened a debate over the bill's impact on existing work.
California Senate Bill Obligates Kill Switch in AI Models
SB 1047 directs AI developers to integrate a kill switch and to undergo mandatory safety compliance audits. The requirement extends to a prohibition preventing AI developers from producing, using, or distributing models that pose a potential danger.
Elon Musk, whose Grok AI platform has been criticized for facilitating disinformation, expressed support for the SB 1047 bill. The Tesla chief considers the proposal a tough call that will upset some people.
In a Monday post on X, Musk said California should consider it timely to enact the SB 1047 AI safety bill. The tech entrepreneur reiterated his previous calls for greater regulatory oversight, noting that he has advocated for AI regulation for over a decade.
The SB 1047 bill faces opposition from several parties. In particular, the Microsoft-backed OpenAI, which Musk co-founded, vehemently opposes it.
The Sam Altman-led OpenAI penned a response letter to the bill's author, Scott Wiener. The San Francisco-based company behind the popular ChatGPT model claims that SB 1047 will ultimately hurt Silicon Valley's ability to become the global AI leader.
Andrew Ng, who previously headed Google Brain, a deep learning AI project at Google, also criticized the bill. The executive decried the bill in June, arguing that it would make builders of AI models liable if other parties misused their models.
Ng expressed deep concern regarding California's proposed law SB 1047 in a post on X. The executive considers the bill long and complex, particularly the many parts that mandate safety assessments and shutdown capability for AI models.
If SB 1047 becomes law, AI developers will have to comply with five key requirements. Among them, developers must be able to quickly shut down a model and must maintain a security plan alongside written safety guidelines.
Retain Safety Plan for AI Models
SB 1047 requires developers to retain an unredacted copy of the safety plan for as long as the model is available, and for an additional five years afterward, along with records of all updates.
Beginning in January next year, the law directs developers to engage independent auditors annually to verify compliance with its provisions. Developers must retain the full audit report for the same duration as the safety plan.
SB 1047 obligates developers to grant the Attorney General access to the entity's safety plan. The responsibility extends to furnishing audit reports whenever the Attorney General requests them.
The new law also prohibits developers from releasing or utilizing a model, whether for commercial or public use, if it poses a significant risk of harming individuals.
The California bill cleared the committee stage, allowing assembly members to vote on it this week. The Senate passed the bill with strong support in May.
If the Assembly approves the bill, it will be tabled before Governor Gavin Newsom, who can veto or sign the proposal into law by September 30.
Crackdown Against Deepfakes
The California bill comes amid crackdowns by various jurisdictions against actors leveraging AI to spread dangerous material, particularly deepfakes. A recent example is South Korea, where President Yoon Suk Yeol initiated a crackdown on actors behind deepfake pornography.
Yoon decried the use of AI to spread digital sex material, as recently reported in South Korean media. The Telegram messaging app is allegedly being used to create and share the deepfakes, often sexually explicit images and clips, despite warnings over the harm to potential victims.
The Yonhap news agency indicates that Korean authorities will aggressively pursue individuals creating and spreading such material in a seven-month initiative beginning on Wednesday.