‘Automated murder’: Israel’s ‘Artificial Intelligence’ (AI) in Gaza

8:56, 13.04.2024

Gazan street during Operation Swords of Iron.
Photo: Greanville Post

‘+972 Magazine’ and ‘Local Call’, independent publications in Israel-Palestine, reported that as the Israel Defense Forces press their savage invasion of the Gaza Strip, they deploy an artificial intelligence program called Lavender that so far has marked some 37,000 Palestinians as kill targets. In the early weeks of the Israeli siege, according to the Israeli sources ‘+972’ cites, “the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based.”

In the Lavender case, the kill lists the system produced were accepted and acted on as if they were human decisions, with no actual human oversight or independent verification. A second AI system, sadistically named “Where’s Daddy?”, was then used to track Hamas suspects to their homes. The IDF intentionally targeted suspected militants while they were with their families, using unguided munitions, or “dumb” bombs. This strategy had the advantage of enabling Israel to preserve its more expensive precision-guided weapons, or “smart” bombs.

Once Lavender identified a potential suspect, IDF operatives had about 20 seconds to verify that the target was a male before making the decision to strike. There was no other human analysis of the “raw intelligence data.” The information generated by Lavender was treated as if it were “an order,” sources told ‘+972’ — an official order to kill. Given the strategy of targeting suspects in their homes, the IDF assigned acceptable kill ratios for its bombing campaigns: 20 to 30 civilians for each junior-level Hamas operative. For Hamas leaders with the rank of battalion or brigade commander, ‘+972’s sources said, “The army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander.”

In other words, Israeli policy, guided and assisted by AI technology, made it inevitable that thousands of civilians, many of them women and children, would be killed.

Anadolu, Turkey’s state-run news agency, reported as far back as February that Israel is using Gaza as a weapons-testing site so that it can market these tools as battle-tested. Antony Loewenstein, an author Anadolu quotes, calls this the marketing of “automated murder.”

And here we find ourselves: Haaretz, the Israeli daily, reported on April 5 that “intelligent” weapons proven effective in Gaza were major attractions when Israel marketed them last month at the Singapore Airshow, East Asia’s biggest arms bazaar.

Along with the ‘+972’ report on the use of AI came others in a week notable for its stomach-churning news of Israeli depravity. In its April 3 editions The Guardian revealed that the IDF intentionally deploys snipers and quadcopters (remotely controlled sniper drones) to target children. The evidence of this comes from U.S. and Canadian doctors who, while serving in Gaza, treated many children with wounds consistent with, and easily identified as caused by, snipers’ bullets. Such rounds are larger than the ammunition generally used in combat because they are intended to kill rather than wound.

There is reality and there is meta-reality, a term I have used previously in this space. How do the two stand side-by-side? How does the latter, the conjured “reality,” prove so efficacious? How do so many accept the 220-plus-accidents “narrative”?

 
