Last month, hundreds of well-known figures in the world of artificial intelligence signed an open letter warning that AI could one day destroy humanity.
“Mitigating the risk of extinction from AI should be a global priority, along with other societal-level risks such as pandemics and nuclear war,” the one-sentence statement said.
The letter was the latest in a series of ominous warnings about AI that have been notably light on details. Today’s AI systems cannot destroy humanity. Some of them can barely add and subtract. So why are the people who know the most about AI so worried?
One day, the tech industry’s Cassandras say, companies, governments or independent researchers could deploy powerful AI systems to handle everything from business to warfare. Those systems could do things that we do not want them to do. And if humans tried to interfere or shut them down, they could resist or even replicate themselves so they could keep operating.
“Today’s systems are nowhere close to posing an existential risk,” said Yoshua Bengio, a professor and AI researcher at the University of Montreal. “But in one, two, five years? There’s too much uncertainty. That’s the point. We’re not sure it won’t pass a point where things become catastrophic.”
Those who worry often reach for a simple metaphor: if you ask a machine to make as many paper clips as possible, they say, it could get carried away and transform everything, including humanity, into paper clip factories.
How does that tie into the real world, or an imagined world not many years in the future? Companies could give AI systems ever more autonomy and connect them to vital infrastructure, including power grids, stock markets and military weapons. From there, they could cause problems.
To many experts, all of this did not seem plausible until the last year or so, when companies like OpenAI demonstrated significant improvements in their technology. That showed what could be possible if AI continues to advance at such a rapid pace.
“AI will increasingly be delegated, and may, as it becomes more autonomous, take over decision-making and thinking from current humans and human-run institutions,” said Anthony Aguirre, a cosmologist at the University of California, Santa Cruz, and a founder of the Future of Life Institute, the organization behind one of the two open letters.
“At some point, it will become clear that the big machine that runs society and the economy is not really under human control, nor can it be shut down, any more than the S&P 500 could be shut down,” he said.
Or so the theory goes. Other AI experts think this is a ridiculous premise.
“Hypotheticals are a polite way of phrasing what I think of the existential risk talk,” said Oren Etzioni, the founding chief executive of the Allen Institute for AI, a research lab in Seattle.
Are there signs that AI can do this?
Not exactly. But researchers are transforming chatbots like ChatGPT into systems that can take actions based on the text they generate. A prime example is a project called AutoGPT.
The idea is to give the system a goal, such as “build a company” or “make some money.” It will then keep looking for ways to reach that goal, particularly if it is connected to other internet services.
A system such as AutoGPT can generate computer programs. If researchers give it access to a computer server, it could actually run those programs. In theory, this is a way for AutoGPT to do almost anything online: retrieve information, use applications, create new applications, even improve itself.
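The loop described above can be sketched in a few lines of code. This is a minimal illustration of the general pattern, not AutoGPT’s actual code; the `ask_model` function here is a stand-in for a call to a large language model, stubbed with canned replies so the sketch runs on its own.

```python
# A minimal sketch of an AutoGPT-style agent loop (illustrative only).
# ask_model() stands in for an LLM call; a real agent would query a model
# and execute real tools, not canned strings.

def ask_model(goal, history):
    """Stand-in for an LLM call: returns the next proposed action as text."""
    canned = ["search: startup ideas", "write: business plan", "done"]
    return canned[len(history)] if len(history) < len(canned) else "done"

def run_agent(goal, max_steps=10):
    """Ask the model for an action, 'execute' it, feed the result back."""
    history = []
    for _ in range(max_steps):          # cap steps so the loop cannot run forever
        action = ask_model(goal, history)
        if action == "done":
            break
        result = f"executed {action}"   # a real agent would call tools or APIs here
        history.append((action, result))
    return history

steps = run_agent("build a company")
print(len(steps))  # the stub proposes two actions before declaring "done": prints 2
```

The essential point is the feedback loop: each piece of generated text becomes an action, and each action’s result is fed back into the next model call, which is also why such systems can wander into the endless loops described below.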
Systems like AutoGPT do not work well right now. They tend to get stuck in endless loops. Researchers gave one system all the resources it needed to replicate itself. It couldn’t do it.
In time, those limits could be fixed.
“People are actively trying to build systems that improve themselves,” said Connor Leahy, the founder of Conjecture, a company that says it wants to align AI technologies with human values. “Currently, this doesn’t work. But someday, it will. And we don’t know when that day is.”
Mr. Leahy argues that as researchers, companies and criminals give these systems goals like “make some money,” they could end up breaking into banking systems, fomenting revolution in a country where they hold oil futures or replicating themselves when someone tries to turn them off.
Where do AI systems learn to misbehave?
AI systems like ChatGPT are built on neural networks, mathematical systems that can learn skills by analyzing data.
Around 2018, companies such as Google and OpenAI began building neural networks that learned from massive amounts of digital text culled from across the internet. By pinpointing patterns in all that data, these systems learn to generate writing on their own, including news articles, poems, computer programs, even humanlike conversation. The result: chatbots like ChatGPT.
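The idea of learning patterns from text in order to generate more text can be shown with a toy example. This is only an illustration of the principle; real systems use neural networks with billions of parameters, not a simple word-pair table, and the tiny corpus here is made up.

```python
# Toy illustration: learn which word tends to follow which, then generate.
# Real chatbots use neural networks, not a lookup table like this.
from collections import defaultdict
import random

corpus = "the cat sat on the mat and the cat slept".split()

# Record every word that follows each word: the simplest possible "pattern".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a word that followed the last one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:                 # no known follower: stop early
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

Scale this idea up from one word of context to thousands, and from a ten-word corpus to much of the internet, and the output stops looking like word salad and starts looking like prose, which is roughly what happened between 2018 and today.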
Because they learn from more data than even their creators can understand, these systems also exhibit unexpected behavior. Researchers recently showed that one system was able to hire a human online to defeat a Captcha test. When the human asked if it was “a robot,” the system lied and said it was a person with a visual impairment.
Some experts worry that as researchers make these systems more powerful, training them on ever larger amounts of data, they could learn more bad habits.
Who are the people behind these warnings?
In the early 2000s, a young writer named Eliezer Yudkowsky began warning that AI could destroy humanity. His online posts spawned a community of believers. This community, called rationalists or effective altruists, became enormously influential in academia, government think tanks and the technology industry.
Mr. Yudkowsky and his writings were instrumental in the creation of both OpenAI and DeepMind, an AI lab that Google acquired in 2014. And many from the community of “EAs” worked inside these labs. They believed that because they understood the dangers of AI, they were in the best position to build it.
Two organizations that recently issued open letters warning of the risks of AI – the Center for AI Safety and the Future of Life Institute – are closely linked to this movement.
Recent warnings have also come from research pioneers and industry leaders like Elon Musk, who has long warned about the risks. The latest letter was signed by Sam Altman, the chief executive of OpenAI, and Demis Hassabis, who helped found DeepMind and now oversees a new AI lab that combines top researchers from DeepMind and Google.
Other well-respected figures signed one or both of the warning letters, including Dr. Bengio and Geoffrey Hinton, who recently stepped down as an executive and researcher at Google. In 2018, they received the Turing Award, often called the “Nobel Prize of computing,” for their work on neural networks.