The auto-generated poll that Microsoft placed alongside a Guardian article on its news aggregation platform was “absurd” and caused significant damage to The Guardian’s reputation, the newspaper said on Thursday.
The poll, which was posted last week alongside an article about a woman who was found dead in a school bathroom in Australia, asked readers to guess the woman’s cause of death. It gave three options: murder, accident or suicide. The Guardian said the poll was created using generative artificial intelligence, which can produce text, images and other media from prompts.
Anna Bateson, chief executive of Guardian Media Group, said in a letter to Microsoft that the poll was “clearly an inappropriate use of GenAI.”
“Such an application is not only potentially distressing to the family of the person who is the subject of the story, but it is also very harmful to the Guardian’s hard-earned reputation for trustworthy, sensitive journalism, and to the reputations of the individual journalists who wrote the original story,” Ms. Bateson wrote in the letter, sent on Tuesday to Brad Smith, Microsoft’s vice chair and president. Ms. Bateson said The Guardian had previously asked Microsoft not to apply its experimental AI technologies to Guardian news articles because of the risks involved.
A spokesperson for The Guardian said the poll was “absurd” and that commenters on the news aggregation platform Microsoft Start believed The Guardian was to blame. One reader, unaware that the poll had been created not by The Guardian but by Microsoft, wrote: “This is the most pathetic, disgusting poll I have ever seen. The author should be ashamed.” Another commented, “Polling the reason behind a person’s death? What is wrong with you!!”
Microsoft did not immediately respond to a request for comment, but in a statement to Axios it said it had “disabled Microsoft-generated polls for all news articles” and was “investigating the cause of the inappropriate content.”
The Guardian’s statement also criticized Microsoft for leaving the poll up for four days. It was removed on Monday after The Guardian contacted Microsoft, a Guardian spokesperson said.
The British government this week hosted a summit to discuss the long-term safety of artificial intelligence, which resulted in 28 governments including China and the United States agreeing to cooperate on AI risk management.
But the agreement failed to set out specific policy goals, and The Guardian and other publishers have called on tech companies to spell out how they will ensure the safe use of artificial intelligence. In her letter, Ms. Bateson asked Microsoft to specify how it will prioritize reliable news sources, pay fair compensation for the licensing and use of journalism, and provide transparency and safeguards around its technologies.
Matt Rogerson, director of public policy at The Guardian, said tech companies need to set out how they will deal with situations in which the use of artificial intelligence goes wrong. Although Microsoft took responsibility for the poll, it did not add any note to the article, he said.