
Photos: How did Israel use AI to falsify facts in the Gaza war?

In 2023, the stock-image marketplace of American software company Adobe was found to be selling images of the war between Hamas and Israel that had been created by artificial intelligence (AI).

They featured explosions, protests, and clouds of smoke rising behind Al-Aqsa Mosque.

At the time, these images caused an uproar. Despite doubts about their authenticity, several media outlets fell into the trap of using them as real photographs without noting that they were AI-generated.

This year, however, the problem goes far beyond the sale of falsified images.

Now, the goal is psychological warfare waged by the Israeli occupation forces through a false, manipulative narrative designed to evoke sympathy for Israel and to divert attention from the massacres in Gaza and the innocent people killed and injured there each day.

According to the British newspaper The Guardian, misinformation has flooded the internet since the start of the war on Gaza, contributing to rising tensions.

Misleading information and the deliberate publication of false news have blurred the line between reality and lies.

The news agency Bloomberg reported that the Israeli occupation army uses an AI recommendation system to process huge amounts of data and select targets for air strikes, according to statements army officials made to the agency.

It also uses data on military-approved targets to calculate munition loads, to prioritize and assign thousands of targets to aircraft and drones, and to propose a strike schedule. The suspicion that AI was also being used to falsify facts about the Gaza war led experts from around the world to analyze the accuracy of the information published about current events by both sides.

The website of the German broadcaster Deutsche Welle (DW) states that misleading content about the war between Israel and Hamas is not only amplifying confusion and hatred on social media; it is even leading some to question the veracity of genuine war images, sowing unnecessary doubt at a time of highly polarized public opinion.

Previous research by the Anti-Defamation League (ADL) includes recommendations on what technology companies should do to address abuses of generative AI (GAI), the class of AI tools that can alter existing images and create entirely new ones.

For example, an AI image generator can be prompted to create an image of a dead child covered in blood and surrounded by debris. The model matches the prompt against patterns learned from its training data and then synthesizes a new image that combines all the visual elements of the user's request.
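To make concrete how little effort such generation takes, here is a minimal sketch using the open-source Hugging Face diffusers library in Python. The model identifier and prompt are illustrative assumptions, not tied to any specific fake discussed in this article.

```python
# Minimal sketch of prompt-to-image generation with Hugging Face diffusers.
# The model id and prompt are illustrative; a disinformation purveyor would
# simply substitute an emotionally charged prompt of their own.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # any public text-to-image checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "photorealistic street after an air strike, rubble, smoke, dusk"
image = pipe(prompt, num_inference_steps=30).images[0]  # one synthetic photo
image.save("generated.png")
```

A few seconds of GPU time per image is enough, which is why such fakes can be produced in bulk.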

To appeal to emotion, disinformation purveyors use AI to tell a fictional story about the conflict.

Though these pictures can bear clear signs of falsification, those signs are rarely obvious at first glance, and an untrained eye will usually be fooled, finding it difficult to distinguish them from the many images of actual victims, according to Deutsche Welle.

How Israel falsified the Gaza war with AI
One AI-generated image showed a fake tent city that users claimed had been built for Israeli refugees. It spread quickly on X (formerly Twitter) and Instagram.

Experts have warned of the repercussions these fake images will have on public confidence in what is shared online.

Digital forensics expert Hany Farid, a professor at the School of Information at the University of California, Berkeley, told DW that exposure to manipulated imagery makes everything subject to suspicion.

Farid added that misleading information spreads widely during conflicts charged with this level of emotion. AI-fabricated images circulate across social media to inspire outrage and appeal to feelings, distorting the truth.

Tommaso Canetta, an expert at the European Digital Media Observatory, has identified two key categories of these falsified images.

The first consists of images centered on the suffering of civilians, which aim to arouse sympathy; the second seeks to drum up support for either Israel or Palestine by stirring patriotic sentiment.


Fabricated pictures distort the truth in the Gaza Strip

An image spread online shows soldiers waving Israeli flags as they march through Gaza amid the rubble of destroyed homes.

The photo first appeared on X and Instagram via pro-Israel accounts and even made its way into a Bulgarian newspaper article, despite the fact that the image is fake.

Closer inspection reveals visual oddities, such as distorted Israeli flags, strangely homogeneous rubble, and an implausibly clean street in the middle of the picture, indicating that the image was generated by AI.
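Manual inspection of this kind does not scale, so fact-checkers often supplement it with automated forensics. The sketch below illustrates one classic screening technique, error level analysis (ELA); the sources in this article do not specifically cite it, and it is offered only as an example, implemented with the Python Pillow library with hypothetical file names.

```python
# Error level analysis (ELA): re-save a JPEG and inspect how unevenly
# different regions recompress. Spliced or synthetic regions often stand
# out because their compression history differs from the rest of the image.
from PIL import Image, ImageChops, ImageEnhance

original = Image.open("suspect.jpg").convert("RGB")   # hypothetical file
original.save("resaved.jpg", "JPEG", quality=90)      # recompress once
resaved = Image.open("resaved.jpg")

ela = ImageChops.difference(original, resaved)        # per-pixel error

# Stretch the (usually faint) differences so they are visible to the eye.
max_diff = max(channel_max for _, channel_max in ela.getextrema())
ela = ImageEnhance.Brightness(ela).enhance(255.0 / max(max_diff, 1))
ela.save("ela_overlay.png")                           # bright patches = suspicious
```

ELA is only a heuristic: an image generated entirely by AI has a uniform compression history and may pass it, which is why experts like Farid combine several signals rather than rely on any single test.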

Another example is an image purportedly showing the charred corpse of an Israeli child, which was shared on X by Israeli Prime Minister Benjamin Netanyahu, conservative American commentator Ben Shapiro, and others.

However, American political commentator Jackson Hinkle claimed that the image was created by AI.

Research into the image's origins suggested that the photo had been edited from a picture of a rescued puppy wrapped in a towel, with the animal replaced by a charred corpse.
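Tracing an image back to a likely source typically combines reverse image search with a similarity measurement. The sketch below shows one common similarity check, perceptual hashing, using the Python ImageHash library; the file names are hypothetical, and this is a generic illustration rather than the specific method the researchers used.

```python
# Compare a suspect image against a candidate source with perceptual hashes.
# Unlike cryptographic hashes, perceptual hashes change only slightly when
# an image is resized, recompressed, or lightly edited.
from PIL import Image
import imagehash  # pip install ImageHash

suspect = imagehash.phash(Image.open("shared_image.jpg"))      # hypothetical
candidate = imagehash.phash(Image.open("possible_source.jpg")) # hypothetical

distance = suspect - candidate  # Hamming distance between 64-bit hashes
if distance <= 10:
    print(f"Near-duplicate (distance {distance}): likely derived from the source")
else:
    print(f"Distance {distance}: substantial edits, or unrelated images")
```

Heavy edits, such as swapping out an image's subject entirely, can push the distance up, so analysts treat hashing as one clue among several alongside metadata and reverse-search hits.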

While the photo of the puppy could not be traced to any legitimate online source, ADL analysts independently verified photos of other, similarly burned victims in Israel.

“Even by the standards of the fog of war we are accustomed to, this conflict is particularly chaotic,” Farid warned.

The specter of deepfakes looms far larger now: a fabrication no longer needs tens of thousands of fakers or weeks to spread; a very small number of people can now circulate it at record speed.
