When OpenAI released ChatGPT in November, the chatbot immediately captured the public’s imagination with its ability to answer questions, write poetry and hold forth on almost any topic. But the technology can mix fact with fiction and even invent information outright, a phenomenon scientists call “hallucination.”
ChatGPT is powered by what AI researchers call a neural network. It is the same technology that translates between French and English for services like Google Translate and that identifies pedestrians as self-driving cars navigate city streets. A neural network learns skills by analyzing data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
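The cat example can be boiled down to a few lines of code. The sketch below is purely illustrative, not anything OpenAI or Google actually runs: the “photos” are just pairs of invented numbers, and the network is a single artificial neuron whose weights are adjusted by gradient descent until it separates “cat” examples from “not cat” examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "thousands of cat photos": each example is two made-up
# measurements (say, ear pointiness and whisker length), labeled 1 for cat, 0 otherwise.
cats = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(200, 2))
not_cats = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(200, 2))
X = np.vstack([cats, not_cats])
y = np.concatenate([np.ones(200), np.zeros(200)])

# A single artificial neuron: two weights and a bias, nudged repeatedly
# in the direction that reduces its mistakes on the training data.
w, b = np.zeros(2), 0.0
learning_rate = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "cat"
    grad_w = X.T @ (p - y) / len(y)          # how the error changes with each weight
    grad_b = np.mean(p - y)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

# A new example that resembles the "cat" cluster is scored close to 1.0.
test = np.array([1.8, 2.2])
print(1.0 / (1.0 + np.exp(-(test @ w + b))))
```

No one tells the program what a cat looks like; the rule emerges from the examples, which is the same principle, at a vastly larger scale, behind the systems described below.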
Researchers at labs like OpenAI have designed neural networks that analyze vast amounts of digital text, including Wikipedia articles, books, news stories and online chat logs. These systems, known as large language models, have learned to generate text on their own, but they can repeat errors found in that text or combine facts in ways that produce false information.
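At its core, such a model learns which words tend to follow which. The toy bigram model below is a drastically simplified stand-in (the three-sentence “corpus” is made up, and real systems use deep neural networks trained on billions of documents), but it shows how a program trained only to continue text can produce fluent-looking sentences that no source ever contained.

```python
import random
from collections import defaultdict

# A tiny "training corpus" standing in for Wikipedia articles, books and chat logs.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# The model only learns these statistics: which word tends to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(0)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])  # pick a plausible next word
    output.append(word)
print(" ".join(output))
# The output reads smoothly but splices its sources into sentences nobody wrote,
# a small-scale version of a large model combining facts into false statements.
```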
In March, the Center for AI and Digital Policy, an advocacy group that pushes for the ethical use of technology, asked the FTC to block OpenAI from releasing new commercial versions of ChatGPT, citing concerns over bias, disinformation and security.
The organization updated the complaint less than a week ago, describing additional ways the chatbot could cause harm, which it said OpenAI had also acknowledged.
“The company itself has acknowledged the risks associated with the product’s release and has itself called for regulation,” said Marc Rotenberg, the president and founder of the Center for AI and Digital Policy. “The Federal Trade Commission needs to act.”
OpenAI is working to refine ChatGPT and reduce the frequency of biased, false or otherwise harmful content. As employees and other testers use the system, the company asks them to rate the usefulness and accuracy of its responses. Then, through a technique called reinforcement learning, it uses those ratings to define more precisely what the chatbot will and will not do.
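A heavily simplified sketch of that feedback loop appears below. It assumes nothing about OpenAI’s actual code: the candidate responses and the testers’ ratings are invented, and the update rule is a textbook policy-gradient (REINFORCE) step that nudges the system toward the kinds of answers raters score highly and away from those they penalize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ways the chatbot might answer a prompt (toy stand-ins).
responses = ["helpful, sourced answer", "confident but unsupported claim", "refusal"]

# The "policy": one preference score per response, turned into probabilities via softmax.
scores = np.zeros(len(responses))

def sample(scores):
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    i = rng.choice(len(scores), p=probs)
    return i, probs

# Simulated human ratings: testers reward the sourced answer, penalize the unsupported one.
ratings = {0: 1.0, 1: -1.0, 2: 0.1}

learning_rate = 0.5
for step in range(200):
    i, probs = sample(scores)
    reward = ratings[i]
    # REINFORCE-style update: raise the probability of highly rated responses,
    # lower it for poorly rated ones.
    grad = -probs
    grad[i] += 1.0
    scores += learning_rate * reward * grad

_, final_probs = sample(scores)
print({r: round(float(p), 3) for r, p in zip(responses, final_probs)})
```

After training, the sourced answer dominates the probabilities, which is the small-scale analogue of using ratings to shape what the chatbot will and will not do.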