The Automation of Atrocity: How Israel Uses AI to Identify and Kill Palestinians

Ali Gündoğar
6 min read · Aug 10, 2024


Recent reports have exposed a chilling reality in the ongoing Israeli-Palestinian conflict: the Israel Defense Forces (IDF) are employing artificial intelligence (AI) systems to generate kill lists, identify targets, and guide airstrikes in Gaza. This revelation has sparked international outrage and reignited concerns about the ethics of autonomous weapons systems. This article delves into the specific allegations, the technologies in question, and the potential implications for international law and human rights.

Lavender: The AI System Compiling Kill Lists

At the heart of the controversy lies an AI program known as “Lavender.” Developed by the IDF, Lavender was initially intended to identify and prioritize low-ranking operatives within Hamas and Islamic Jihad. However, investigative journalists at +972 Magazine and Local Call have uncovered evidence suggesting that the system’s scope has expanded dramatically since the outbreak of hostilities on October 7, 2023.

According to sources who spoke to journalist Yuval Abraham, Lavender now encompasses a staggering 90% of Gaza’s population (over a million individuals) and assigns each person a score based on their likelihood of being affiliated with militant groups. Worryingly, the system relies on a “list of small features” that remains largely undisclosed, raising concerns about the accuracy and bias inherent in its algorithms.

Disturbingly, the IDF itself acknowledges using AI for target identification, despite claiming it does not use systems that label individuals as “terrorists.” This contradiction muddies the waters and demands closer scrutiny.

The “Where’s Daddy” Program: Tracking Targets to their Homes

Lavender, as unsettling as it is, represents only one part of a larger, interconnected system. A second program, chillingly dubbed “Where’s Daddy,” has also come to light. This program, according to sources, links individuals identified by Lavender to their homes, even if those homes are not sites of militant activity. The program then alerts intelligence officers when these individuals are present, creating opportunities for targeted assassinations.

This combination of Lavender and “Where’s Daddy” facilitates a deadly strategy: tracking individuals, often alongside their families, to their homes and then obliterating the entire structure with “dumb bombs,” munitions known for their indiscriminate and devastating impact. The justification given by some IDF personnel for using these less precise bombs on “low-ranking” targets is even more disturbing — these individuals are simply not important enough to “waste” expensive, guided munitions on. This chilling calculus devalues Palestinian lives and reflects a shocking disregard for civilian casualties.

The Human Cost of AI Warfare: Disproportionate Force and Civilian Casualties

The implications of using systems like Lavender and “Where’s Daddy” are profound and deeply disturbing. By automating the process of target identification and attack authorization, these programs create a dangerous distance between human operators and the consequences of their actions. This detachment can lead to a phenomenon known as “automation bias,” where individuals become overly reliant on the outputs of AI systems, even when those outputs are flawed or ethically questionable.

Further exacerbating this issue is the reported lack of oversight within the IDF when it comes to approving Lavender’s recommendations. Sources allege that if a target is confirmed as male, attacks are often authorized with minimal scrutiny, raising serious doubts about the extent to which human judgment is genuinely part of the process.

The reliance on these systems has coincided with a devastating increase in civilian casualties. Notably, over 50% of casualties during the initial six weeks of the conflict were members of a relatively small number of families, a stark indicator of the horrifying effectiveness of targeting family homes.

International Law in Crisis: Blurring Lines and Escaping Accountability

The use of AI in warfare, especially systems like those employed by the IDF, raises profound legal and ethical questions. Key principles of international humanitarian law, including distinction, proportionality, and precaution in attack, come under significant strain when AI systems are used to designate targets and authorize lethal force.

The principle of distinction, which mandates that combatants be differentiated from civilians, is jeopardized when systems like Lavender, known to have a significant error rate, are used to generate kill lists.

Proportionality, which prohibits attacks in which the expected civilian harm would be excessive in relation to the anticipated concrete and direct military advantage, is similarly challenged by the IDF’s alleged disregard for the principle, prioritizing expediency and the conservation of expensive munitions over civilian lives.

Finally, the precaution in attack principle, which obliges parties to a conflict to take all feasible steps to minimize civilian harm, is demonstrably violated when families in their homes, often with no connection to militant activity, are targeted based on the output of a system known to be fallible.

The IDF’s response to the allegations, stating that it has not violated international law and that investigations are underway, rings hollow in the face of mounting evidence and the consistent failure to hold individuals accountable for previous incidents. This lack of accountability, coupled with the secrecy surrounding these programs, fosters an environment where violations are likely to continue unchecked.

A Dangerous Precedent: The Global Implications of AI Warfare

The situation in Gaza offers a chilling glimpse into a future where AI plays an increasingly prominent role in warfare. While the technology itself is not inherently evil, its deployment in the context of asymmetric warfare, with limited transparency and questionable adherence to international law, creates a dangerous precedent.

The potential for these systems to be exported to other conflict zones, coupled with the ongoing development of even more sophisticated autonomous weapons, paints a grim picture. As artificial intelligence evolves, it becomes increasingly critical to establish international norms and regulations that govern its use in warfare, ensuring that human judgment and ethical considerations remain paramount.

Conclusion

The revelations about the IDF’s use of AI to target and kill Palestinians represent a watershed moment. The automation of warfare is no longer a dystopian hypothetical, but a terrifying reality with devastating consequences for civilians caught in the crossfire.

This is not simply a matter of technological advancement, but a fundamental question of human rights, accountability, and the future of warfare itself. The international community must urgently address the ethical and legal dilemmas posed by AI in armed conflict, lest we descend into a world where machines decide who lives and who dies, with little regard for the sanctity of human life.

FAQs

  1. Is the use of AI in warfare illegal under international law?
    Currently, there is no specific international treaty banning the use of AI in warfare. However, existing international humanitarian law, particularly the principles of distinction, proportionality, and precaution in attack, applies to all weapons systems, including those governed by AI.
  2. What can be done to regulate the development and deployment of AI in warfare?
    International cooperation is crucial to establish binding legal frameworks that govern the development and use of autonomous weapons systems, including AI-powered ones. This will require addressing issues of accountability, transparency, and meaningful human control over the use of lethal force.
  3. What are the long-term consequences of normalizing the use of AI in warfare?
    The increasing reliance on AI in warfare raises concerns about the erosion of human judgment in conflict, the potential for accidental escalation, and the creation of a dangerous “moral hazard” where states are shielded from accountability for civilian casualties.
  4. What are the alternatives to using AI in warfare?
    Investing in diplomatic solutions, conflict resolution mechanisms, and promoting adherence to international law remain the most effective ways to prevent conflict and protect civilians.
  5. What can individuals do to raise awareness about the ethical implications of AI in warfare?
    Individuals can educate themselves about the issue, engage in public discourse, and support organizations advocating for responsible AI development and the regulation of autonomous weapons systems.
