The Microsoft-backed OpenAI has joined over 200 firms and industry executives in a consortium supporting the US AI Safety Institute, which aims to ensure responsible AI innovation.
The Sam Altman-led OpenAI joins other tech giants, including Amazon, Google, Microsoft, and Apple, in forming the AI Safety Institute Consortium (AISIC). The consortium's formation follows the executive order President Joe Biden issued four months ago.
AISIC Attracts Hundreds of Tech Companies
The AISIC counts more than 200 member organizations, including top-tier rivals such as Nvidia, OpenAI, Apple, Google, Anthropic, and Amazon. The consortium comprises AI developers, academics, industry researchers, and government and civil society representatives.
The AISIC aims to unite all stakeholders behind the safe and trustworthy development and deployment of AI. US Commerce Secretary Gina Raimondo hailed the diverse membership as heeding President Biden's call to pull every lever to guarantee safety standards and safeguard the innovation ecosystem.
Raimondo added that the US AISIC seeks to help the industry achieve AI innovation without compromising safety. She said the consortium's formation arose from President Biden's October Executive Order.
Raimondo explained that the October Executive Order directed the development of guidelines for evaluating AI models. The order also called for risk management practices, safety checks, security metrics, and watermarking to identify AI-generated content.
AISIC Poised to Restore US Competitiveness in Responsible and Safe AI Development
Raimondo asserted AISIC's resolve to keep the US at the forefront of responsible and safe AI innovation. In particular, the Commerce Secretary hailed the diversity of the consortium's membership as well placed to shape viable solutions that retain US competitiveness in responsible AI development.
The consortium draws additional representatives from workers' unions, academia, banking, and healthcare. The Commerce Department welcomed JPMorgan, Bank of America, and Citigroup, which join Ohio State University and Carnegie Mellon University.
The Georgia Tech Research Institute and state and local governments are also among the AISIC's members, and the consortium anticipates collaboration with international partners.
The Commerce Department profiled the consortium as the largest assembly of test and evaluation teams to date, oriented towards applying measurement science to AI safety.
The department added that the consortium will work with like-minded nations to develop effective, interoperable tools for AI safety.
The participants' list is extensive, yet a few notable companies are absent. Tesla, Oracle, and Broadcom are the most prominent top-ten-ranked tech firms missing from the roster, and the foreign-based chipmaker TSMC has also yet to join.
AISIC to Expedite Measures to Counter Misuse of AI-generated Output
The formation of AISIC is timely, given the rapid spread of generative AI tools and their integration into mainstream sectors. That spread has fueled widespread misuse and a surge in AI-generated deepfakes online.
US political leaders, including President Biden and Donald Trump, have fallen victim to deepfakes, as has global music star Taylor Swift.
The influx of deepfakes prompted the US Federal Communications Commission (FCC) to declare AI-generated robocalls that leverage deepfake voices illegal.
The FCC lamented the escalation of robocalls, which can mislead consumers with misinformation by imitating the voices of celebrities, politicians, and family members.
Google Pledges Support for Responsible AI Development
World leaders have treated AI risk with attention rivaling that given to nuclear war since the Microsoft-backed OpenAI unveiled GPT-4 in early 2023.
Concerns over AI prompted the Biden Administration to convene a meeting in May last year. The meeting drew tech and AI companies that pledged to develop AI responsibly, and most of them have now joined the AISIC.
Google's head of Global Affairs, Kent Walker, indicated that no single entity can get AI right without collaboration. The executive welcomed joining other tech companies in endorsing commitments to safe and responsible AI development.
Walker is optimistic that the consortium will facilitate collaboration and information sharing.