Israel’s Lavender: What could go wrong when AI is used in military operations?


In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines the Israeli Defence Forces’ use of an AI system called Lavender to target Hamas operatives. While it reportedly shares the hallucination issues familiar from AI systems like ChatGPT, the cost of errors on the battlefield is incomparably severe.

So last week, six Israeli intelligence officials spoke to an investigative reporter for a magazine called +972 about what might be the most dangerous weapon in the war in Gaza right now, an AI system called Lavender.

As I discussed in an earlier video, the Israeli Army has been using AI in its military operations for some time now. This isn’t the first time the IDF has used AI to identify targets, but historically, those targets had to be vetted by human intelligence officers. According to the sources in this story, however, after the Hamas attack of October 7th, the guardrails were taken off, and the Army gave its officers sweeping approval to bomb targets identified by the AI system.
