FILE - Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration taken, February 19, 2024. (REUTERS/Dado Ruvic/Illustration/File Photo)

In the middle of the 20th century, computer scientists dreamed of building a computer as smart as humans. Now, tech companies are in a race to build it.

Called artificial general intelligence, or AGI, the systems would do much more than current AI technology. AGI would be as good as humans in many areas of human thinking. These include planning, problem solving, and learning from experience.

Google, Meta, Microsoft, Amazon and ChatGPT maker OpenAI are all working to develop AGI.

At the same time, governments and leading AI scientists worry about ways the technology could be dangerous for humanity.

AGI development

As artificial intelligence has developed, the meaning of AGI has changed over time.

Geoffrey Hinton is a scientist whose early research helped create the technology behind today's AI systems. He told The Associated Press that 20 years ago, people would have agreed that a system like ChatGPT had general intelligence. But now people think AGI should be able to do more than what ChatGPT does.

Still, scientists have not yet agreed on a clear definition of AGI. “I don’t think there is agreement on what the term means,” Hinton said by email.

FILE – AI scientist Geoffrey Hinton poses at Google’s Mountain View, Calif., headquarters on March 25, 2015. Hinton prefers a term for AGI — superintelligence — “for AGIs that are better than humans.” (AP Photo/Noah Berger, File)

Most current artificial intelligence systems are generative AI. That means they produce things such as texts or images. While generative AI is mainly limited to such tasks, AGI could do many different kinds of work.

Pei Wang is a professor who teaches an AGI course at Temple University in the American state of Pennsylvania. He helped organize the first AGI conference in 2008. He said that adding the letter ‘G’ to AI sent an important message. It showed that computer scientists “still want to do the big thing. We don’t want to build tools. We want to build a thinking machine,” Wang said.

Dr. Pei Wang teaches an artificial general intelligence class at Temple University in Philadelphia, Thursday, Feb. 1, 2024. (AP Photo/Matt Rourke)

Finding agreement on how to measure AGI is one of the issues that will be discussed next month at an AGI conference in Vienna, Austria.

“This really needs a community’s effort and attention so that mutually we can agree on some sort of classifications of AGI,” said Jiaxuan You. He is an assistant professor at the University of Illinois Urbana-Champaign.

OpenAI, based in the state of California, says its nonprofit board of directors will decide when its systems have reached AGI. OpenAI defines AGI as a system that can “outperform humans at most economically valuable work.”

If OpenAI develops AGI, its partner Microsoft will not have the rights to make money from it. In 2020, Microsoft signed an agreement with OpenAI that gave Microsoft the rights to OpenAI’s AI models. However, that agreement is supposed to end once OpenAI has created AGI.

Is AGI dangerous?

Last year the 76-year-old Hinton quit his job at Google. He wanted to share his worries about the possible dangers of AI.

A recent study in the journal Science also warned of AI systems able to plan for the future.

Michael Cohen is a researcher at the University of California, Berkeley, and lead writer of the study. He said he worries that an AGI system might create plans to make humans disappear.

“I hope we’ve made the case that people in government (need) to start thinking seriously about exactly what regulations we need to address this problem,” Cohen said.

Push to develop AGI

Some people in the tech industry want to develop AGI slowly and carefully. Still, companies are competing to develop the systems.

London-based DeepMind was founded in 2010 with the goal of developing AGI. It is now owned by Google. OpenAI was founded in 2015 to develop AGI, but with the added goal of making the technology safe.

In January, Meta CEO Mark Zuckerberg said his company’s long-term goal was to build “full general intelligence.”

At Amazon, the head scientist for the voice assistant Alexa changed job titles to become head scientist for AGI.

You, the University of Illinois researcher, said working on AGI may help tech companies find the most talented computer scientists. He said many researchers would choose to work on AGI instead of generative AI.

I’m Andrew Smith. And I’m Dorothy Gundy.

Matt O’Brien wrote this story for The Associated Press. Andrew Smith adapted it for VOA Learning English.

______________________________________________

Words in This Story

task – n. a job, function, or piece of work to do

mutually – adv. both, together

classification – n. a system of organization or putting into groups