This Week in Worcester spoke with multiple sources who confirmed that the military’s deployment of AI led to the missile strike against the Shajareh Tayyebeh girls’ school in Minab, a city in southern Iran, on Saturday. Ali Bahreini, Iran’s ambassador to the U.N. in Geneva, said the strike killed 150 students. There has been no independent confirmation of that death toll.
The Pentagon is currently investigating the matter, and according to a report from Reuters, military officials acknowledged that the U.S. may have been responsible for the attack. There is no evidence at this time that the U.S. military intentionally targeted the school. A compound near the school was previously associated with the Islamic Revolutionary Guard Corps (IRGC).
This Week in Worcester spoke with a Department of Justice appointee on the condition of anonymity, citing an ongoing and active investigation. “The immediate theory is that the AI program included the school’s position based on older, archived intelligence. The logic behind the launch, and the mechanics of who authorized it is unclear.”
This Week in Worcester spoke with a logistics programmer in the Department of Defense (DOD), who said that the department rapidly scaled up its use of a Claude-based system over the past year, integrating it with many core operational decisions.
“They are gung-ho about this program, and want to use it for everything. Most of their operational planning is done using this software, although there are some things we have designed in-house,” said the programmer.
Military investigators are also conducting their own review of the incident.
Press Secretary Karoline Leavitt said in a statement to Reuters, “While the Department of War is currently investigating this matter, the Iranian regime targets civilians and children, not the United States of America.”
Defense Secretary Pete Hegseth said in response to being asked about the incident: “We’re investigating that. We, of course, never target civilian targets. But we’re taking a look and investigating that.”
Multiple news agencies reported on the military’s use of Claude AI, made by Anthropic.
This week, the Trump Administration declared Anthropic a supply chain risk after the company insisted that the government not use its technology for mass surveillance of Americans or for autonomous vehicles. The military has six months to eliminate its use of Claude. Following the announcement, the administration signed a contract with OpenAI, the maker of ChatGPT.
This Week in Worcester previously reported that AI errors led to delays in the release of the Epstein files, with many files either mistakenly redacted or unredacted without human supervision. The same source now says that some of the files being moved on and off the DOJ’s Epstein website are “being double-checked by human attorneys.”
Image Credit: Touch Of Light, The Pentagon, Headquarters of the US Department of Defense (cropped2), CC BY-SA 4.0