
Europe's AI Regulation Leadership Hangs in the Balance

15:54 06.12.2023

The European Union's push to approve the world's first comprehensive artificial intelligence (AI) rules is at risk of being upended due to the boom in generative AI technology. Governments worldwide are scrambling to regulate this emerging technology, but the recent emergence of generative AI systems like OpenAI's ChatGPT has raised concerns about the risks they pose. The EU's Artificial Intelligence Act, hailed as a pioneering rulebook, may not be finalized as the EU's three branches of government struggle to reach a deal in the final round of talks.

Europe has been working for years to establish AI regulations, but the rapid development of generative AI has complicated these efforts. Countries including the United States, the United Kingdom, and China, as well as global coalitions like the Group of 7 major democracies, are now racing to catch up and regulate the technology, though none has yet reached the level of progress seen in Europe.

In addition to regulating generative AI, EU negotiators are faced with resolving other contentious issues, including limits on AI-powered facial recognition and surveillance systems that have raised privacy concerns. Despite the challenges, there is optimism that a political agreement can be reached, as all negotiators seek a win in this flagship legislative effort. However, the significant and critical issues at hand mean that the possibility of not finding a deal cannot be ruled out.

Although 85% of the technical wording in the bill has already been agreed upon, the failure to reach an agreement in the current round of talks could result in a delay until after the EU-wide elections in June or a change in direction as new leaders take office. One of the key sticking points is the regulation of foundation models, which are the advanced systems underlying general-purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. These large language models are trained on vast amounts of data scraped from the internet and give generative AI systems the ability to create something new.

The AI Act was initially intended as product safety legislation, grading AI uses according to four levels of risk. The proposal was expanded to cover foundation models in response to the new wave of generative AI systems released since the legislation's first draft in 2021. Researchers have warned that these powerful foundation models, built by a handful of big tech companies, could be misused for online disinformation and manipulation, cyberattacks, or the creation of bioweapons.

France, Germany, and Italy have resisted the update to the legislation, advocating for self-regulation instead. This change in stance is seen as an attempt to help their own generative AI players, such as Mistral AI in France and Aleph Alpha in Germany, compete with major U.S. tech companies like OpenAI.

While there has been some movement on the issue of foundation models, there are still challenges in reaching an agreement on facial recognition systems. Brando Benifei, an Italian member of the European Parliament involved in the negotiations, remains optimistic about resolving differences with member states.

AP Technology Writer Matt O'Brien contributed to this report from Providence, Rhode Island.



