The Israeli military received strong support from Microsoft and OpenAI through artificial intelligence.
An investigation by the Associated Press revealed that the Israeli military has received robust support from tech giants Microsoft and OpenAI through artificial intelligence (AI) systems. This support came during the genocide in the Gaza Strip, in which tens of thousands of Palestinians were killed and the region was left in ruins.
Published on Tuesday, the investigation exposed flaws in the AI systems the Israeli military employed during the Gaza war, errors that could lead to civilian deaths and that raise serious ethical and legal questions.
Israel’s use of AI in its attacks on Lebanon and the ongoing genocide in Gaza since October 7, 2023, has drawn sharp criticism. Critics argue that the technology fails to distinguish between civilian and military targets, a practice widely regarded as a war crime.
Despite the backlash, Israel continues to receive backing from American AI companies. Previous reports highlighted the use of systems such as “Lavender” and “Habsora” for monitoring and targeting civilians.

The Associated Press investigation delved into the Israeli military’s reliance on AI, analyzing data from Microsoft and OpenAI and interviewing Israeli officials. According to these officials, detecting errors in AI systems is extremely challenging. These systems operate alongside other forms of intelligence, including human input, and their errors can result in “unjust deaths.”
The report uncovered that Israeli forces used AI programs for mass surveillance, collecting data from phone calls, text messages, and voicemails, which were then transcribed and translated, according to an Israeli intelligence official who spoke to the agency.
The official explained that Microsoft Azure is used to rapidly search vast amounts of text for specific terms. This capability also helps the military’s AI systems identify individuals giving directions to others and pinpoint their locations.
Data reviewed by the agency shows that since October 7, 2023, the Israeli military has extensively used OpenAI models alongside speech transcription and translation tools. The military claims that AI-generated translations are reviewed by Arabic-speaking personnel. However, an Israeli official warned that mistakes are still possible.
For instance, the Arabic word for a rocket launcher grip is the same as the word for “push.” In one case, the AI mistranslated this term, and the human reviewer initially failed to catch the error. The official added that profiles built from this data can sometimes be inaccurate: in one alarming case, the system wrongly flagged about 1,000 high school students as “potential extremists.”
Officials also cautioned that AI might mistakenly target a home linked to a Hamas member, even if that individual no longer lives there.
The investigation found that the Israeli military’s use of AI from Microsoft and OpenAI in March 2024 had surged to roughly 200 times its level before October 7, 2023. Between March and July 2024, the volume of data stored on Microsoft servers doubled, exceeding 13.6 petabytes, while server usage spiked by two-thirds within two months of the October 7 attacks.
Heidy Khlaaf, a former senior safety engineer at OpenAI, stated, “This is our first confirmation of commercial AI models being directly used in warfare.”
The investigation highlighted Microsoft’s “particularly close relationship” with the Israeli military, which deepened after October 7, 2023. Documents reviewed by the agency show that Microsoft signed a three-year, $133 million contract with Israel’s Ministry of Defense in 2021, making Israel the company’s second-largest military client after the United States.
With U.S. support, Israel’s actions in Gaza between October 7, 2023, and January 19, 2025, resulted in a genocide that left over 160,000 Palestinians dead or injured, mostly women and children, and more than 14,000 missing amid widespread destruction.