Google’s Restrictions On Bard AI
Search engine giant Google has announced plans to restrict the election-related queries its AI chatbot, Bard, will answer. The move reflects the company’s approach to controlling AI-generated content in the lead-up to the 2024 US presidential election.
In a blog post released on December 19, the tech giant emphasized the need to curtail Bard’s responses on election matters. The restrictions are set to take effect by early 2024, ahead of the upcoming US presidential election.
A Focus On AI Transparency And Control
With a heightened focus on AI’s role in shaping public discourse, Google has previously mandated AI disclosures in political campaign ads across its platforms, including YouTube.
Furthermore, Google shared insights into the beta-stage development of SynthID, a tool from Google DeepMind. The tool aims to embed digital watermarks into AI-generated images and audio, reinforcing efforts to trace the origin of such content.
Addressing Concerns And Broader Implications
Google’s move echoes Meta’s decision last month to bar political advertisers from using its AI-powered ad-creation tools. The discussion around AI’s impact on elections is gaining momentum as the US elections draw closer.
Studies have surfaced indicating that AI-powered chatbots can provide misleading information. For instance, one study of Microsoft’s Bing AI chatbot, Copilot, found inaccuracies in around 30% of its election-related responses.
Google’s decision to limit Bard’s handling of election queries marks a significant step toward regulating AI-generated information during crucial political events. It also underscores the growing responsibility of tech giants to maintain information accuracy and transparency.
US Standards Group Calls For Public Input On AI Safety And Risk Mitigation
Meanwhile, the US National Institute of Standards and Technology (NIST) has solicited feedback from AI companies and the general public on managing the risks associated with generative AI, including how to mitigate the spread of AI-generated misinformation.
The request by NIST, which operates under the US Department of Commerce, aligns with its responsibilities under the recent presidential executive order promoting secure and responsible development and use of artificial intelligence (AI).
With particular reference to generative AI, which can produce text, photos, and videos from open-ended prompts, NIST’s inquiry aims to address both the excitement and the concerns surrounding the technology, seeking insights on how its risks can be managed.
Red-Teaming Practices And AI Consortium Formation
Moreover, NIST seeks guidance on “red-teaming” practices for AI risk assessment, a term drawn from Cold War-era military simulations. Red-teaming involves simulating adversarial scenarios to expose vulnerabilities and weaknesses in a system or organization.
The approach has long been used in cybersecurity to identify potential risks. NIST’s recent efforts, including the formation of an AI consortium and an open call for applicants, underscore its commitment to developing measures grounded in a human-centered approach to AI safety and governance.
These initiatives reflect a concerted effort to regulate the evolving landscape of AI technology, with emphasis on safety, reliability, and ethical considerations in its development and implementation.