Disinformation researchers worried about the consequences of the judge’s order

According to researchers and groups combating hate speech, online abuse and disinformation, a federal judge’s decision this week to restrict the government’s communication with social media platforms could have a wider side effect: further hindering their own efforts to fight those problems.

Alice E. Marwick, a researcher at the University of North Carolina at Chapel Hill, was one of several disinformation experts who said Wednesday that the decision could hamper work to curb the spread of false claims about vaccines and voter fraud.

She said the order follows other efforts, largely by Republicans, that are “part of an organized campaign pushing back the idea of disinformation as a whole.”

Judge Terry A. Doughty granted a preliminary injunction Tuesday barring the Department of Health and Human Services and the Federal Bureau of Investigation, along with other parts of the government, from communicating with social media companies for “the purpose of urging, encouraging, pressuring, or inducing in any manner” the removal, deletion, suppression or reduction of protected free speech material.

The decision stems from a lawsuit by the attorneys general of Louisiana and Missouri, who accused Facebook, Twitter and other social media sites of sometimes colluding with the government to censor right-wing content. They and other Republicans hailed the judge’s move in the U.S. District Court for the Western District of Louisiana as a victory for the First Amendment.

Several researchers said, however, that the government’s dealings with social media companies were not a problem as long as it did not force them to remove content. Instead, they said, the government has historically notified companies about potentially dangerous messages, such as lies about election fraud or misleading information about COVID-19. Most misinformation or disinformation that violates a social platform’s policies is flagged by researchers, nonprofits, or people and software at the platform itself.

“That’s the really important distinction here: The government should be able to notify social media companies about things they think are harmful to the public,” said Miriam Metzger, a communication professor at the University of California, Santa Barbara, and an affiliate of its Center for Information Technology and Society.

A major concern, the researchers said, is the potential chilling effect. The judge’s decision barred certain government agencies from communicating with certain research organizations, such as the Stanford Internet Observatory and the Election Integrity Partnership, about the removal of social media content. Some of those groups have already been targeted in a Republican-led legal campaign against universities and think tanks.

Researchers said such conditions could deter young scholars from pursuing this kind of research and scare off important grant donors.

Bond Benton, an associate communications professor at Montclair State University who studies disinformation, described the decision as “a bit of a potential Trojan horse.” It is, on paper, limited to the government’s relationship with social media platforms, but it sends a message that misinformation qualifies as speech and removing it qualifies as suppression of speech, he said.

“Previously, platforms could just say we don’t want to host it: ‘no shirt, no shoes, no service,’” Dr. Benton said. Now, he said, platforms may worry that removing such content will be cast as suppressing speech.

In recent years, platforms have relied more on automated tools and algorithms to detect harmful content, limiting the effectiveness of complaints from people outside the companies. Academics and anti-disinformation organizations often complain that platforms are unresponsive to their concerns, said Viktorya Vilk, director for digital safety and free expression at PEN America, a nonprofit that supports free expression.

“Platforms are very good at ignoring civil society organizations and our requests for help or requests for information or escalation of individual cases,” she said. “They are less comfortable ignoring the government.”

Many disinformation researchers worry that the decision could give cover to social media platforms, some of which have already scaled back their efforts to curb misinformation, to be even less vigilant ahead of the 2024 election. They said it is unclear how relatively new government initiatives that had incorporated researchers’ concerns and suggestions, such as the White House task force to address online harassment and abuse, will fare.

For Imran Ahmed, chief executive of the Center for Countering Digital Hate, Tuesday’s decision underscored other issues: the United States’ “particularly toothless” approach to dangerous content compared with places like Australia and the European Union, and outdated rules governing the liability of social media platforms. Tuesday’s ruling cited a presentation the Center made to the surgeon general’s office about its 2021 report on online anti-vaccine activists, “The Disinformation Dozen.”

“It’s great that you can’t show something at the Super Bowl, but Facebook can still broadcast Nazi propaganda, empower stalkers and harassers, undermine public health and fuel extremism in the United States,” Mr. Ahmed said. “This court’s decision further enhances the sense of impunity under which social media companies operate, despite the fact that they are primary vectors of hate and disinformation in society.”
