According to the agreement, general-purpose AI models such as ChatGPT must adhere to transparency requirements before entering the market.

On December 8, European Parliament and Council negotiators reached a provisional agreement on rules governing the use of artificial intelligence (AI).

EU Reaches Agreement on AI Transparency, Biometric Surveillance and Control

The agreement covers governments' use of artificial intelligence for biometric surveillance and sets out transparency requirements that must be met before AI systems enter the market.

Its scope extends to how AI systems such as ChatGPT are to be regulated. The agreement also requires technical documentation, compliance with European Union (EU) copyright law, and the disclosure of summaries of the content used for training.

The European Union aims to become the first supranational authority to regulate artificial intelligence, setting out how it should be used for good while guarding against its risks. The deal was reached on December 8 after a debate lasting approximately 24 hours and subsequent negotiations lasting 15 hours.

AI Models to Disclose Systemic Risks to the European Commission

According to the agreement, high-impact AI models that pose systemic risks must assess and mitigate those risks, carry out adversarial testing to ensure system resilience, report incidents to the European Commission, report on energy efficiency, and ensure cybersecurity.

Proper enforcement will be critical. To that end, the Parliament will support new businesses through regulatory sandboxes, alongside practical rules for the most powerful models.

Thierry Breton, the European Commissioner for the Internal Market, announced the deal's completion in a post on X (formerly Twitter).

Under the agreement, general-purpose AI systems that carry systemic risks must adhere to codes of practice. Governments may use real-time biometric surveillance only in specific cases, such as certain crimes or serious threats in public spaces.

EU Prohibits Cognitive Behavioral Manipulation

Further, the deal bans cognitive behavioural manipulation, social scoring, the scraping of facial images from CCTV footage or the internet, and the use of biometric systems to infer sensitive information such as sexual orientation or beliefs. Consumers will have the right to file complaints and receive explanations.

Violations would attract fines ranging from €7.5 million ($8.1 million) or 1.5% of global revenue up to €35 million ($37.7 million) or 7% of global revenue.

According to the European Parliament's statement, the agreed text must be formally approved by both the Council and the Parliament before it becomes a European Union regulation. The Parliament's civil liberties and internal market committees will vote on the agreement at a forthcoming meeting.

By Michael Scott

Michael Scott is a skilled and seasoned news writer with a talent for crafting compelling stories. He is known for his attention to detail, clarity of expression, and ability to engage his readers with his writing.