California Governor Gavin Newsom is championing a long-term agenda that stresses the need for Californians to build the skills required to navigate the swiftly evolving landscape of generative artificial intelligence (GenAI). In a recent report, Newsom highlighted the urgency of preparing the workforce for the GenAI economy so that individuals can thrive amid rapid technological change.
The governor’s directive underscores Californians’ need to embrace and adapt to the evolving GenAI landscape. The report emphasizes the imperative for the state’s populace to gain access to comprehensive educational and training opportunities in GenAI.
Specifically, the proposal outlines plans for state government workers to receive specialized training in state-approved GenAI applications, with the goal of achieving equitable outcomes across various sectors.
Addressing Employment Shifts Due To GenAI
The move is a proactive response to the employment shifts forecast in reports on GenAI. Goldman Sachs projections cited in the report suggest that GenAI could affect some 300 million jobs worldwide, even as the technology delivers substantial productivity gains.
The report asserts the critical role of the state in spearheading training initiatives. By fostering an environment that supports and trains its workforce in GenAI technologies, California aims to fulfill the robust demand for skilled workers and position itself as a hub for GenAI-driven businesses.
Integration Of GenAI Education
In its recommendation, the report advocates for integrating GenAI education at higher education institutions and vocational schools. This strategic initiative seeks to equip a generation of individuals with the necessary expertise to harness the potential of GenAI effectively.
The report’s assertions align with recent global reports spotlighting AI’s anticipated impact on employment dynamics. The Organisation for Economic Co-operation and Development (OECD) identified high-skill, white-collar jobs as particularly susceptible to the transformative effects of AI.
‘Secure by Design’ AI Guidelines
Meanwhile, a consortium comprising the United States, the United Kingdom, Australia, and 15 other nations has introduced comprehensive guidelines to fortify artificial intelligence (AI) models against tampering. Termed ‘secure by design,’ these guidelines aim to equip AI firms with robust cybersecurity practices throughout the lifecycle of AI development and deployment.
The collaborative effort, unveiled in a 20-page document, emphasizes the critical need for heightened cybersecurity measures within the evolving AI landscape.
Key Emphases Of The Guidelines
The guidelines urge AI firms to exercise stringent control over the infrastructure supporting AI models. They also stress the importance of comprehensive tracking to detect tampering both before and after an AI model's release.
Furthermore, the guidelines underscore the imperative of training staff members on cybersecurity risks, an essential step to ensure a vigilant and proactive approach to AI security.
However, some contentious issues in the AI domain are notably absent from the guidelines, including the regulation of deepfakes, image-generating models, and the data-collection methods used to train models.
These topics have sparked legal debates, including copyright infringement claims against multiple AI firms.
Global Significance And Industry Response
US Secretary of Homeland Security Alejandro Mayorkas highlighted the significance of cybersecurity in shaping AI systems that are innovative, safe, and secure, calling it pivotal to this era-defining technology.
The release of these guidelines follows a series of global initiatives addressing AI’s impact, including a recent AI Safety Summit in London that brought together authorities and AI firms to collaborate on AI development and regulation.
Simultaneously, the European Union is deliberating its AI Act to regulate the AI landscape, while US President Joe Biden's executive order in October outlined AI safety and security standards.
However, President Biden’s order faces resistance from the AI industry, which fears it would stifle innovation.