AI complicates Israel-Hamas war in unexpected ways

Much attention was spent on questionable footage that showed no signs of AI tampering, such as a video of the director of a bombed hospital in Gaza giving a news conference – which some called “AI-generated” – even though it was filmed from different vantage points by multiple sources.

Other examples have been harder to classify: The Israeli military released a recording of what it described as a wiretapped conversation between two Hamas members, but which some listeners said was faked audio (The New York Times, the BBC and CNN reported that they had not yet been able to verify the conversation).

In an attempt to separate truth from AI, some social media users have turned to detection tools that claim to spot digital manipulation but have proven unreliable. A test by The Times found that image detectors had a poor track record, sometimes misdiagnosing pictures that were obvious AI creations, or labeling genuine photographs as inauthentic.

In the first few days of the war, Mr. Netanyahu shared a series of images on Twitter, claiming they were “horrific pictures of children being murdered and burned” by Hamas. When the conservative commentator Ben Shapiro amplified one of the images on X, he was repeatedly accused of spreading AI-generated content.

One post, which received more than 21 million views before being deleted, claimed to provide evidence that the image of the child was fake: a screenshot of AI or Not, a detection tool, identifying the image as “generated by AI.” The company later corrected that conclusion on X, saying that the result was “inconclusive” because the image had been compressed and altered to obscure identifying details; the company also said it had since refined its detector.

“We realized that every technology that has been created has been used for evil at some point,” said Anatoly Kvitnitsky, the chief executive of AI or Not, which is based in the San Francisco Bay Area. “We came to the conclusion that we are trying to do good, so we will keep the service active and do our best to ensure that we are the bearers of truth. But we did think about it – are we creating more confusion, more chaos?”

AI or Not is working to show users which parts of an image are suspected to be AI-generated, Mr. Kvitnitsky said.

Henry Ajder, an expert on manipulated and synthetic media, said that available AI detection services can be helpful as part of a larger suite of tools, but are dangerous if treated as the final word on content authenticity.

Deepfake detection tools, he said, “provide the wrong solution to a much more complex and difficult-to-solve problem.”

Instead of relying on detection services, initiatives like the Coalition for Content Provenance and Authenticity and companies like Google are exploring strategies that would identify the source and history of media files. The solutions are far from perfect – two groups of researchers recently found that existing watermarking techniques are easy to remove or evade – but proponents say they could help restore some confidence in the quality of content.
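
A brief aside for technically minded readers: the fragility those researchers describe can be illustrated with a toy example. The sketch below (Python, using NumPy and Pillow; the names and numbers are purely illustrative, and this is not one of the watermarking schemes studied in the cited research) hides a bit pattern in the least significant bit of each pixel, then shows that a single round of lossy JPEG re-compression, the kind of processing images routinely undergo on social platforms, scrambles the hidden pattern.

import io

import numpy as np
from PIL import Image

def embed_lsb(pixels, mark):
    # Overwrite each pixel's least significant bit with one watermark bit.
    return ((pixels & 0xFE) | (mark & 1)).astype(np.uint8)

def extract_lsb(pixels):
    # Read back the least significant bit of every pixel.
    return pixels & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)    # stand-in for a photo
watermark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)  # hidden bit pattern

marked = embed_lsb(image, watermark)
print("bits recovered before re-compression:",
      (extract_lsb(marked) == watermark).mean())   # 1.0

# Simulate routine platform processing: one round of lossy JPEG compression.
buffer = io.BytesIO()
Image.fromarray(marked, mode="L").save(buffer, format="JPEG", quality=85)
buffer.seek(0)
recompressed = np.array(Image.open(buffer))

print("bits recovered after re-compression:",
      (extract_lsb(recompressed) == watermark).mean())   # roughly 0.5, i.e. chance

Real provenance and watermarking proposals are considerably more robust than this toy, but the underlying difficulty – that ordinary image processing can erase or weaken an embedded signal – is the same one the researchers point to.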

“Proving what is fake would be a futile endeavor, and we are going to boil the ocean trying to do it,” said Chester Wisniewski, an executive at the cybersecurity firm Sophos. “That will never work, and we just need to focus on how we can start validating what is real.”

For now, social media users looking to deceive the public are relying far less on photorealistic AI images than on old footage of past conflicts or disasters, which they falsely portray as the current situation in Gaza, according to Alex Mahadevan, the director of MediaWise, Poynter’s media literacy program.

“People will believe anything that confirms their beliefs or makes them emotional,” he said. “It doesn’t matter how good it is, or how new it looks, or anything like that.”
