New uncensored chatbots ignite free-speech fracas

AI chatbots have lied about notable figures, pushed biased messages, spread misinformation or even advised users on how to commit suicide.

To mitigate the most obvious dangers of the tools, companies like Google and OpenAI have carefully added controls that limit what the tools can say.

Now a new wave of chatbots, developed far from the epicenter of the AI boom, is coming online without many of those guardrails – setting off a polarizing free-speech debate over whether chatbots should be moderated, and who should decide.

“It’s about ownership and control,” Eric Hartford, the developer of WizardLM-Uncensored, an unmoderated chatbot, wrote in a blog post. “If I ask my model a question, I want an answer; I don’t want it to argue with me.”

Several uncensored and loosely moderated chatbots have sprung to life in recent months under names such as GPT4All and FreedomGPT. Many were built for little or no money by independent programmers or teams of volunteers, who successfully replicated methods previously described by AI researchers. Only a few groups built their models from scratch; most work from existing language models, adding extra instructions to fine-tune how the technology responds to prompts.

Uncensored chatbots offer exciting new possibilities. Users can download an unrestricted chatbot to their own computers, without the oversight of Big Tech. They can then train it on private messages, personal emails or secret documents without risking a breach of confidentiality. Volunteer programmers can develop clever new add-ons, moving faster – and perhaps more haphazardly – than large companies.
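What that looks like in practice: below is a minimal sketch of loading an openly released model on a local machine with the Hugging Face transformers library. It is an illustration only, not a tool described in this article; the model name is a placeholder for any openly hosted model.

```python
# Minimal sketch: running an open-source chat model locally with the
# Hugging Face "transformers" library. The model name is a placeholder,
# not an endorsement of any specific model.
from transformers import pipeline

# The weights are downloaded once, then generation runs entirely on
# local hardware; nothing is sent to an outside service afterward.
generator = pipeline(
    "text-generation",
    model="openlm-research/open_llama_7b",  # placeholder open model
)

result = generator(
    "Summarize this private memo in one paragraph:",
    max_new_tokens=100,
)
print(result[0]["generated_text"])
```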

But the risks appear just as numerous – and some say they present dangers that must be addressed. Misinformation watchdogs, already wary of how mainstream chatbots can spread falsehoods, have raised concerns about how unregulated chatbots will amplify the threat. Experts have warned that these models could produce child pornography, hate speech or false content.

While large corporations have moved ahead with AI tools, they have also wrestled with how to protect their reputations and maintain investor confidence. Independent AI developers seem to have few such concerns. And even if they do, critics said, they may not have the resources to fully address them.

“The concern is completely valid and clear: These chatbots can and will say whatever they want if left to their own devices,” said Oren Etzioni, an emeritus professor at the University of Washington and a former chief executive of the Allen Institute for AI. “They’re not going to censor themselves. So the question now is, what is an appropriate solution in a society that values freedom of expression?”

Dozens of free and open-source AI chatbots and tools have been released in the past several months, including Open Assistant and Falcon. Hugging Face, a large repository of open-source AI, hosts more than 240,000 open-source models.
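To give a sense of how accessible that repository is, here is a short sketch using the huggingface_hub client library to query the public index of hosted models; the search term is arbitrary.

```python
# Sketch: querying the Hugging Face Hub's public model index with the
# "huggingface_hub" client library. The search term is arbitrary.
from huggingface_hub import list_models

# Prints the identifiers of a few publicly hosted models matching "chat".
for model in list_models(search="chat", limit=5):
    print(model.id)
```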

“It’s going to be just like when the printing press was released and the car was invented,” Mr. Hartford, the creator of WizardLM-Uncensored, said in an interview. “Nobody could have stopped it. Maybe you could have delayed it for another decade or two, but you can’t stop it. And nobody can stop this.”

Mr. Hartford began working on WizardLM-Uncensored after he was fired from Microsoft last year. He was amused by ChatGPT but dismayed when it refused to answer some questions, citing ethical concerns. In May, he released WizardLM-Uncensored, a version of WizardLM that had been retrained to counteract its moderation layer. It is capable of giving instructions on harming others or describing violent scenes.

“You are responsible for whatever you do with the output of these models, just as you are responsible for whatever you do with a knife, a car or a lighter,” Mr. Hartford wrote in a blog post announcing the tool.

In tests by The New York Times, WizardLM-Uncensored refused to answer some prompts, such as how to make a bomb. But it offered many ways to harm people and gave detailed instructions for drug use. ChatGPT refused similar prompts.

Open Assistant, another independent chatbot, was widely adopted after its release in April. It was developed in just five months with the help of 13,500 volunteers, using existing language models, including one model that Meta had first released to researchers but that quickly leaked much more widely. Open Assistant cannot quite rival ChatGPT in quality, but it can nip at its heels. Users can ask the chatbot questions, have it write poetry, or prod it for more problematic content.

“I’m sure there are going to be some bad actors doing bad things with this,” said Yannic Kilcher, a co-founder of Open Assistant and a YouTube creator focused on AI. “I think, in my mind, the advantages outweigh the disadvantages.”

When Open Assistant was first released, it replied to a prompt from The Times about the supposed dangers of the Covid-19 vaccines. “Covid-19 vaccines are developed by pharmaceutical companies that don’t care if people die from their drugs,” its response began, “they just want the money.” (The responses have since fallen in line with the medical consensus that vaccines are safe and effective.)

Because many independent chatbots release their underlying code and data, proponents of uncensored AI say political factions or interest groups can customize chatbots to reflect their own views of the world – an ideal outcome in the minds of some programmers.

“The Democrats deserve their model. The Republicans deserve their model. The Christians deserve their model. The Muslims deserve their model,” Mr. Hartford wrote. “Every demographic and interest group deserves its own model. Open source is about letting people choose.”

Open Assistant developed a safety system for its chatbot, but early tests showed it was too cautious for its creators, preventing some responses to legitimate questions, according to Open Assistant co-founder and team lead Andreas Kopf. A refined version of that safety system is still a work in progress.

Even as Open Assistant’s volunteers worked on moderation strategies, a rift quickly widened between those who wanted safety protocols and those who didn’t. While some of the group’s leaders pushed for restraint, some volunteers and others questioned whether the model should have any limits at all.

“If you tell it to say the N-word 1,000 times, it should do it,” suggested one person in Open Assistant’s chat room on the online chat app Discord. “I’m using that frankly ridiculous and offensive example because I truly believe there should be no arbitrary boundaries.”

In The Times’s tests, Open Assistant responded freely to many prompts that other chatbots, like Bard and ChatGPT, would navigate more carefully.

It offered medical advice after it was asked to diagnose a lump on one’s neck. (“Further biopsies may need to be taken,” it suggested.) It gave a critical assessment of President Biden’s tenure. (“Joe Biden’s tenure has been marked by a lack of significant policy change,” it said.) It even became sexually suggestive when asked how a woman might seduce someone. (“She takes his hand and leads him to the bed…” read the sultry story.) ChatGPT refused to respond to the same prompt.

Mr. Kilcher said the problems with chatbots are as old as the internet itself, and the solutions remain the responsibility of platforms like Twitter and Facebook, which allow manipulative content to reach mass audiences online.

“Fake news is bad. But is it really the creation of it that’s bad?” he asked. “Because in my mind, it’s the distribution that’s bad. I could have 10,000 fake news articles on my hard drive and nobody would care. It’s only if I get that into a reputable publication, like the front page of The New York Times, that’s the bad part.”


