Meta and IBM Launch ‘AI Alliance’ Focused on Advancing Open-Source AI Development
Facebook’s parent company, Meta, and IBM have formed the AI Alliance, which advocates for an “open-science” approach to AI development, putting them at odds with competitors Google, Microsoft, and ChatGPT-maker OpenAI.
These two opposing factions, open and closed, disagree over whether AI should be built in a way that makes the underlying technology broadly available. Safety is central to the argument, but so is the question of who stands to benefit from AI's advances.
Open advocates favor an approach that is “not proprietary and closed,” according to Darío Gil, senior vice president of IBM’s research group. “So it’s not like a thing that is locked in a barrel and no one knows what they are.”
The IBM and Meta-led AI Alliance, which also includes Dell, Sony, chipmakers AMD and Intel, and a number of universities and AI startups, is “coming together to articulate, simply put, that the future of AI is going to be built fundamentally on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies,” Gil said in an interview before the announcement. The group is also expected to press regulators to ensure that any new rules work in its members’ favor.
Yann LeCun, Meta’s chief AI scientist, took aim on social media this autumn at OpenAI, Google, and the startup Anthropic for what he called “massive corporate lobbying” to write the rules in a way that benefits their high-performing AI models and gives them power over the technology’s development. Those three companies, together with OpenAI’s key partner Microsoft, have formed their own industry association, the Frontier Model Forum.
On X, formerly Twitter, LeCun expressed concern that other scientists’ fearmongering about AI “doomsday scenarios” was giving ammo to those who want to limit open-source research and development.
“In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we need the platforms to be open source and freely available so that everyone can contribute to them,” LeCun said in a statement. “Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture.”
“It’s sort of a classic regulatory capture approach of trying to raise fears about open-source innovation,” said Chris Padilla, who leads IBM’s global government affairs team. “I mean, hasn’t this been the Microsoft model for decades? They have long been resistant to open-source programs that compete with Windows or Office. They’re following a similar strategy here.”
The term “open-source” refers to a decades-old practice of building software in which the source code is freely or publicly available for anyone to examine, modify, and build upon.
Open-source AI involves more than just code, and computer scientists disagree on how to define it, depending on which components of the technology are publicly available and whether there are restrictions on its use. Some use the term “open science” for the broader idea.
Part of the confusion around open-source AI is that, despite its name, OpenAI, the company behind ChatGPT and the image generator DALL-E, builds AI systems that are decidedly closed.
“To state the obvious, there are near-term and commercial incentives against open source,” said Ilya Sutskever, OpenAI’s chief scientist and co-founder. There is also a longer-term worry, he added, about an AI system with capabilities so “mind-bendingly powerful” that it would be too dangerous to make publicly available.
According to David Evan Harris of the University of California, Berkeley, even today’s AI models pose risks and could be used, for example, to scale up misinformation operations that destabilize democratic elections.
“Open source is really great in so many dimensions of technology,” Harris added, but AI is different.
“Anyone who watched the movie Oppenheimer knows this: when big scientific discoveries are being made, there are lots of reasons to think twice about how broadly to share the details of all that information in ways that could get into the wrong hands,” he said.
The Center for Humane Technology, a long-time opponent of Meta’s social media tactics, is one of the organizations highlighting the dangers of open-source or leaked AI models.
“As long as there are no guardrails in place right now, it’s just completely irresponsible to be deploying these models to the public,” said Camille Carlton, the group’s executive director.
The dispute over the advantages and disadvantages of an open-source approach to AI development has grown increasingly public.
It was easy to overlook the “open-source” issue in the midst of the uproar over Joe Biden’s broad executive order on AI.
The US president’s order classified open models as “dual-use foundation models with widely available weights” and said that more research was needed. Weights are the numerical parameters that determine how an AI model behaves.
When such weights are posted publicly on the internet, “there can be significant benefits to innovation but also substantial security risks, such as the removal of safeguards within the model,” according to Biden’s directive. He gave Gina Raimondo, the commerce secretary, until July to consult with experts and deliver recommendations on how to manage the potential benefits and risks.
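To make the term concrete, here is a minimal, hypothetical sketch of what “weights” are and what releasing them entails. The toy one-layer model, the file name toy_weights.npz, and the predict function are illustrative assumptions for this article, not code from any real AI system.

```python
# A toy illustration (hypothetical, not any real model's code) of what
# "weights" are: the learned numbers that determine a model's behavior.
# "Releasing the weights" means publishing these numbers so anyone can
# run, reproduce, or modify the model.
import numpy as np

rng = np.random.default_rng(seed=0)

# A toy single-layer model: prediction = inputs @ weights + bias
weights = rng.normal(size=(4, 2))  # the model's numerical parameters
bias = np.zeros(2)

def predict(inputs: np.ndarray) -> np.ndarray:
    """Compute the toy model's output from its weights."""
    return inputs @ weights + bias

# Publishing the weights amounts to sharing a file like this one; whoever
# downloads it can reproduce the model exactly, or fine-tune and alter it.
np.savez("toy_weights.npz", weights=weights, bias=bias)

loaded = np.load("toy_weights.npz")
x = np.ones((1, 4))
assert np.allclose(predict(x), x @ loaded["weights"] + loaded["bias"])
```

Large models have billions of such numbers rather than a handful, but the principle is the same: whoever holds the weights file effectively holds the model, safeguards included or removed.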
Time is shorter in the European Union, where officials trying to clinch approval of the world-leading AI Act are still debating several provisions, including one that could exempt certain “free and open-source AI components” from rules covering commercial models, as negotiations come to a close today.