
Tech companies unite to combat AI-generated election trickery

22:22 16.02.2024

Major Technology Companies Sign Accord to Combat AI-Generated Deepfakes in Elections

Tech giants including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok have come together to voluntarily adopt measures to prevent artificial intelligence (AI) tools from being used to disrupt democratic elections worldwide. The agreement was announced at the Munich Security Conference, where executives from the companies gathered to unveil a new framework for responding to AI-generated deepfakes that aim to deceive voters. Twelve other companies, including Elon Musk's X, are also signing on to the accord.

Nick Clegg, President of Global Affairs for Meta, the parent company of Facebook and Instagram, emphasized the need for collective action, stating, "Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own." While the accord is largely symbolic, it specifically targets increasingly realistic AI-generated images, audio, and video that deceitfully alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in democratic elections. It also addresses the dissemination of false information to voters regarding voting procedures.

The companies have not committed to banning or removing deepfakes but have outlined methods to detect and label deceptive AI content when it is created or distributed on their platforms. They have pledged to share best practices and respond swiftly and proportionately when such content begins to spread. However, the vagueness of the commitments and the absence of binding requirements may disappoint pro-democracy activists and watchdogs seeking stronger assurances.

Clegg defended the flexible approach, stating that each company has its own content policies and that the aim is not to impose restrictions but rather to address the challenges posed by AI. He added, "No one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play whack-a-mole."

Several European and U.S. political leaders attended the announcement, including European Commission Vice President Vera Jourova. Jourova acknowledged the agreement's limitations but praised its positive elements. She urged politicians to take responsibility and avoid deceptive use of AI tools, warning that the combination of AI and disinformation campaigns could undermine democracy.

The agreement comes at a critical time, with more than 50 countries scheduled to hold national elections in 2024. Instances of AI-generated election interference have already been observed in countries such as Bangladesh, Taiwan, Pakistan, and Indonesia. For example, AI robocalls mimicking U.S. President Joe Biden's voice attempted to discourage voting in New Hampshire's primary election last month, and in Slovakia, AI-generated audio recordings impersonated a liberal candidate discussing plans to manipulate the election.

The accord recognizes the importance of context and the protection of educational, documentary, artistic, satirical, and political expression in responding to AI-generated deepfakes. The companies also commit to transparency regarding their policies on deceptive AI election content and to educating the public about identifying and avoiding AI fakes. While some companies have already implemented safeguards on their generative AI tools and are working on identifying and labeling AI-generated content, there is still pressure from regulators to do more.

In the United States, where there is no federal legislation regulating AI in politics, companies are largely self-governing. However, many states are considering implementing measures to regulate the use of AI in elections and other applications. The Federal Communications Commission has recently confirmed that AI-generated audio clips in robocalls are illegal, but this does not cover audio deepfakes circulating on social media or in campaign advertisements.

Experts warn that while AI deepfakes pose a significant threat of misleading voters, cheaper and simpler forms of misinformation remain prevalent. The accord acknowledges this, recognizing that traditional manipulations, known as "cheapfakes," can be used for similar purposes.

The voluntary accord signed by major technology companies represents a collective effort to address the potential misuse of AI-generated deepfakes in elections. While the commitments outlined in the agreement may not satisfy all stakeholders, they mark a step toward greater transparency, collaboration, and accountability in combating AI-driven disinformation.


