Dark corners of the web offer a glimpse of AI’s nefarious future

When the Louisiana Parole Board met in October to discuss the possible release of a convicted murderer, it called in a doctor with years of experience in mental health to talk about the inmate.

The parole board wasn’t the only group paying attention.

A group of online trolls took screenshots of the doctor from her testimony’s online feed and edited the images with AI tools to make her appear naked. They then shared the manipulated files on 4chan, an anonymous message board known for promoting harassment and spreading hateful content and conspiracy theories.

According to Daniel Siegel, a graduate student at Columbia University who researches how AI is being exploited for malicious purposes, this was one of several instances in which people on 4chan used new AI-powered tools, such as audio editors and image generators, to spread racist and offensive content about people who had appeared before the parole board. Mr. Siegel tracked activity on the site over several months.

Mr. Siegel said the manipulated images and audio did not spread far beyond the bounds of 4chan. But experts who monitor fringe message boards said the efforts provide a glimpse of how nefarious Internet users could use sophisticated artificial intelligence tools to fuel online harassment and hate campaigns in the months and years to come.

Calum Hood, head of research at the Center for Countering Digital Hate, said fringe sites like 4chan – perhaps the most notorious of them all – often offer early warning signs of how new technology will be used to introduce extreme views. Those platforms, he said, are filled with young people who are “very quick to adopt new technologies” like AI to “bring their ideology back into mainstream spaces.”

Those tactics, he said, are often adopted by some users on more popular online platforms.

Here are several problems arising from AI tools that experts discovered on 4chan — and what regulators and technology companies are doing about them.

AI tools like DALL-E and Midjourney generate novel images from simple text descriptions. But a new wave of AI image generators has been created for the purpose of producing fake pornography, including by removing clothing from existing images.

“They can use AI to create exactly the image they want,” Mr. Hood said of the groups behind online hate and misinformation campaigns.

With no federal law banning the creation of fake images of people, groups like the Louisiana Parole Board have struggled to determine what can be done. The board opened an investigation in response to Mr. Siegel’s findings on 4chan.

Francis Abbott, executive director of the Louisiana Board of Pardons and Committee on Parole, said, “Any image that portrays our board members or any participant in our hearings in a negative manner, we will certainly take issue with.” “But we have to act within the law, and whether it’s against the law or not – that’s for someone else to determine.”

Illinois expanded its revenge pornography law to allow targets of nonconsensual pornography created by AI systems to sue the creators or distributors. California, Virginia and New York have also passed laws restricting the distribution or creation of AI-generated pornography without consent.

Late last year, ElevenLabs, an AI company, released a tool that can create a faithful digital replica of someone’s voice saying anything typed into the program.

Almost as soon as the tool went live, users on 4chan circulated a fake clip of the British actor Emma Watson reading Adolf Hitler’s manifesto, “Mein Kampf.”

Using material from the Louisiana Parole Board hearings, 4chan users shared fake clips of judges making offensive and racist comments about defendants. Many of the clips were produced with ElevenLabs’ technology, according to Mr. Siegel, who used an AI voice identifier developed by ElevenLabs to investigate the clips’ origins.

ElevenLabs quickly imposed limits, including requiring users to pay before they could gain access to the voice-cloning tools. But experts say the changes have not slowed the spread of AI-generated voices. Hundreds of videos using fake celebrity voices have circulated on TikTok and YouTube, many of them sharing political disinformation.

Some major social media companies, including TikTok and YouTube, have since required labels on some AI content. An executive order issued by President Biden in October mandated that companies label such content and directed the Commerce Department to develop standards for watermarking and certifying AI content.

As Meta moved to gain a foothold in the AI race, the company adopted a strategy of releasing its software code to researchers. The approach, broadly termed “open source,” could speed up development by giving academics and technologists access to more raw material for finding improvements and developing their own tools.

When the company released Llama, its large language model, to select researchers in February, the code promptly leaked onto 4chan. People there put it to various uses: they modified the code to reduce or eliminate guardrails, creating new chatbots capable of generating antisemitic content.

The effort previewed how free-to-use and open-source AI tools could be modified by technically savvy users.

A spokesperson for Meta said in an email, “While the model is not accessible to everyone, and some have tried to circumvent the approval process, we believe the current release strategy allows us to balance responsibility and openness.”

In the months since, language models have been developed to echo far-right talking points or to create more sexually explicit content. Image generators have been modified by 4chan users to produce nude images and racist memes, bypassing the controls imposed by larger technology companies.




