FILE - Alphabet CEO Sundar Pichai speaks about Google DeepMind at a Google I/O event in Mountain View, Calif., May 10, 2023. (AP Photo/Jeff Chiu, File)

Technology leaders have shown major support for laws to govern artificial intelligence use. At the same time, they are seeking to guarantee that any future AI rules work in their favor.

The technology industry is increasingly divided about how to govern AI. One side supports an “open science” method of AI development; the other supports a closed method.

Facebook parent Meta and IBM recently launched a new group called the AI Alliance. The group supports the “open science” method of AI development. On the other side are companies such as Google, Microsoft and ChatGPT-maker OpenAI.

Safety is at the heart of the debate. But, tech leaders are also arguing about who should profit from AI developments.

What is open-source AI?

The term “open-source” comes from a common method of building software in which the code is widely available at no cost. Anyone can examine and make changes to it.

Open-source AI involves more than just code. Computer scientists differ on how to define “open source.” They say the definition depends on which parts of the technology are publicly available and whether there are restrictions on its use.

Some computer scientists use the term “open science” to describe the wider philosophy.

IBM and Meta lead the AI Alliance. Members include Dell, Sony, chipmakers AMD and Intel, and several universities and smaller AI companies. The alliance is coming together to say “that the future of AI is going to be built … on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies,” said Darío Gil of IBM. Gil made the comment in a discussion with The Associated Press.

Concerns about open-source AI

Part of the confusion about open-source AI is that the company that built ChatGPT and the image-generator DALL-E is called OpenAI. But its AI systems are closed.

“There are near-term and commercial incentives against open source,” said Ilya Sutskever, OpenAI’s chief scientist and co-founder, in a video with Stanford University in April.

But there is also a longer-term worry about the open development method. Sutskever noted one worry is that an AI system with powerful abilities could be too dangerous to make available to the public.

For example, he described a possible AI system that could learn how to start its own biological laboratory.

Even current AI models present risks. They could create disinformation campaigns, for example, said David Evan Harris of the University of California, Berkeley. Such campaigns could disrupt democratic elections, he said.

“Open source is really great in so many dimensions of technology,” but AI is different, Harris said.

The Center for Humane Technology, a longtime critic of Meta’s social media activities, is among the groups drawing attention to the risks of open-source or leaked AI models.

“As long as there are no guardrails in place right now, it’s just completely irresponsible to be deploying these models to the public,” said the group’s Camille Carlton.

Benefits and dangers

An increasingly public debate has emerged over the benefits and dangers of using an open-source method of AI development.

Meta’s chief AI scientist, Yann LeCun, this fall criticized OpenAI, Google, and Anthropic on social media for what he described as “massive corporate lobbying.” LeCun argues that the companies are trying to write rules in a way that helps their high-performing AI models and could help them hold their power over the technology’s development. The three companies, along with OpenAI’s key partner Microsoft, have formed their own industry group called the Frontier Model Forum.

LeCun said on X, formerly Twitter, “Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture.”

For IBM, the dispute feeds into a much longer competition that began before the AI boom. IBM was an early supporter of the open-source Linux operating system in the 1990s.

Chris Padilla leads IBM’s international government affairs team. He suggested that some companies are trying to raise fears about open-source innovation now, as they have in the past.

He added, “I mean, this has been the Microsoft model for decades, right? They always opposed open-source programs that could compete with Windows or Office. They’re taking a similar approach here.”

I’m John Russell.

Matt O’Brien reported on this story for the Associated Press. John Russell adapted it for VOA Learning English.

__________________________________________________

Words in This Story

innovation – n. the act of introducing new ideas, devices, or methods

incentive – n. something that encourages a person to do something

disrupt – v. to interrupt the normal progress or activity of something

dimension – n. a part of something

guardrail – n. a protective device along the side of a road that prevents vehicles from driving off the road (can be used metaphorically)

lobby – v. to try to influence government officials to make decisions for or against something