
OpenAI's strategy to combat election misinformation in 2024

03:40 17.01.2024

OpenAI, the San Francisco-based artificial intelligence startup behind ChatGPT and DALL-E, has unveiled a comprehensive plan to prevent the misuse of its generative AI tools in spreading election misinformation. As voters in over 50 countries gear up for national elections this year, OpenAI aims to address the potential weaponization of its popular AI tools, which can produce text and images in seconds but can also be manipulated to create misleading content.

The safeguards outlined by OpenAI combine existing policies with new initiatives aimed at curbing the misuse of its technology. The company plans to ban the creation of chatbots that impersonate real candidates or governments, misrepresent the voting process, or discourage voter participation. Additionally, OpenAI will not allow its tools to be used for political campaigning or lobbying until further research has been conducted on the persuasive power of its technology.

To enhance transparency and traceability, OpenAI will introduce digital watermarks on images generated with its DALL-E image generator. These watermarks will encode information about the content's origin, making it easier to determine whether an image was created with OpenAI's tools.

In collaboration with the National Association of Secretaries of State, OpenAI will direct users of ChatGPT who inquire about voting logistics to accurate information available on the nonpartisan website CanIVote.org. The partnership aims to ensure that users seeking voting-related information are directed to reliable sources.

While the measures taken by OpenAI are seen as a positive step in combating election misinformation, their effectiveness will depend on how they are implemented. Mekela Panditharatne, counsel in the democracy program at the Brennan Center for Justice, emphasizes the need for comprehensive filters to flag questions about the election process and cautions against leaving loopholes that could be exploited.

OpenAI's ChatGPT and DALL-E are among the most powerful generative AI tools available, but numerous other companies offer similar technology without comparable safeguards against election misinformation. Social media companies such as YouTube and Meta have introduced AI labeling policies, but their ability to consistently detect and address violations remains uncertain.

Darrell West, senior fellow in the Brookings Institution's Center for Technology Innovation, suggests that it would be beneficial for other generative AI firms to adopt similar guidelines, enabling industry-wide enforcement of practical rules. Without voluntary adoption across the industry, regulating AI-generated disinformation in politics may require legislation. Although there is some bipartisan support for such measures in the U.S., Congress has yet to pass any bills addressing AI's role in politics, even as more than a third of U.S. states have introduced or passed bills to tackle deepfakes in political campaigns.

OpenAI CEO Sam Altman acknowledges the importance of constant vigilance, saying that even with the company's safeguards in place his mind is not at ease, and that tight monitoring and feedback loops are needed to ensure the technology is used responsibly.

The Associated Press (AP) receives support from various private foundations to enhance its coverage of elections and democracy. The AP is solely responsible for all content related to its democracy initiative.

In summary, OpenAI's plan combines bans on chatbots that impersonate candidates or governments, referrals to reliable voting information, and provenance watermarks on AI-generated images. These measures are viewed as a positive step, but their impact will hinge on enforcement and on whether other generative AI companies adopt similar guidelines; absent broad voluntary adoption, legislation may be needed. Altman himself stresses that constant monitoring and vigilance will be required to keep the technology from being misused.




