
Tech giants divided on whether AI's future should be open source or closed as they lobby regulators

22:06 05.12.2023

Tech leaders including Meta, Facebook's parent company, and IBM have formed the AI Alliance, a group advocating an "open science" approach to AI development. That puts them at odds with rivals such as Google, Microsoft, and ChatGPT maker OpenAI, which prefer a closed approach. The disagreement centers on whether AI should be built in a way that makes the underlying technology widely accessible, with safety and profit the key concerns on each side: open advocates argue for a non-proprietary approach, while proponents of closed systems warn of the potential dangers of openly accessible AI.

Open-source AI refers to the practice of building software with freely accessible code, though the definition varies depending on how much of the technology is made public and whether there are restrictions on its use. The AI Alliance, led by IBM and Meta, aims to build the future of AI on open scientific exchange and innovation, including open source and open technologies. Adding to the confusion, OpenAI, despite its name, builds closed AI systems. According to Ilya Sutskever, OpenAI's chief scientist and co-founder, there are near-term, commercial incentives against open source; he also highlights the potential danger of an AI system with powerful capabilities being publicly accessible.

Concerns about open-source AI include the risks posed by current models, such as their potential use in disinformation campaigns to disrupt democratic elections. Critics argue that open sourcing can be irresponsible and that caution should be exercised in sharing sensitive information. The Center for Humane Technology, a longtime critic of Meta's social media practices, points to the risks of open-source or leaked AI models and considers deploying them to the public without proper guardrails to be completely irresponsible.

The debate over open-source AI has become increasingly public. Meta's chief AI scientist, Yann LeCun, criticized OpenAI, Google, and startup Anthropic for what he described as "massive corporate lobbying" to write rules that benefit their AI models and concentrate power in their hands. LeCun expressed concern that fearmongering about AI "doomsday scenarios" was giving ammunition to those who want to ban open-source research and development. He argues that openness is necessary to reflect the entirety of human knowledge and culture in AI platforms.

For IBM, the dispute is part of a longer competition that predates the AI boom. Chris Padilla, who leads IBM's global government affairs team, sees it as a classic regulatory capture approach aimed at raising fears about open-source innovation. He compares it to Microsoft's opposition to open-source programs that could compete with Windows or Office.

On the government side, the debate over open-source AI did not feature prominently in U.S. President Joe Biden's executive order on AI. The order did, however, call for further study of "dual-use foundation models with widely available weights," the weights being the numerical parameters that shape an AI model's performance. It acknowledged the benefits such openness can bring to innovation but also highlighted the security risks of making those weights widely available.
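To make "widely available weights" concrete, here is a minimal illustrative sketch, not drawn from the article, of how openly published weights are typically obtained and run. It assumes the Hugging Face transformers library is installed, and the model identifier is hypothetical.

    # Minimal sketch, assuming the Hugging Face "transformers" library is installed.
    # The model name below is hypothetical; any openly licensed checkpoint works the same way.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "example-org/open-7b-model"  # hypothetical open-weights repository

    # Downloading the checkpoint copies the weights, billions of floating-point
    # numbers, onto the user's own machine, where they can be inspected, fine-tuned,
    # or redistributed subject only to the model's license.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Every weight is just a numerical parameter that shapes the model's behavior.
    total = sum(p.numel() for p in model.parameters())
    print(f"Downloaded {total:,} parameters (weights) for {model_id}")

Once the weights are on a user's machine, the original developer has no technical control over how the model is modified or deployed, which is precisely the trade-off between innovation and security the executive order asks regulators to study.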

Overall, the debate over open-source AI comes down to the accessibility, safety, and profitability of the technology. Tech leaders remain divided on whether the underlying technology should be widely accessible or closely guarded, and as AI continues to advance, that disagreement will play a significant role in shaping its development and regulation.

