US Military Investigating Whether AI Was Involved in Bombing Elementary School in Iran



Commercial satellite imagery captured last week shows the eerie devastation following the bombing of an Iranian elementary school. At least 175 people, including a large number of young schoolgirls, were killed in the attack. Haunting drone footage showed excavators digging dozens of graves for the victims.

The airstrike, allegedly part of an offensive targeting an Iranian military complex nearby, provided a grotesque example of the horrors unfolding since the beginning of the US-Israel war on Iran late last month.

The massacre also raised a grim technological question. It had already been reported that the US military has been using Anthropic’s Claude to select targets during the attacks on Iran — and, strikingly, the Pentagon refused to confirm or deny whether AI had any role in the school’s bombing when Futurism asked.

At first, neither the US nor Israel wanted to take the blame for the carnage. US President Donald Trump desperately tried to steer clear as well, claiming that Iran had murdered its own children in cold blood.

Now, as the New York Times reports, US officials have confirmed that a US military Tomahawk missile strike was indeed responsible for the bloodbath at the school. According to a preliminary finding, officers at the US Central Command “created the target coordinates for the strike using outdated data provided by the Defense Intelligence Agency,” sources briefed on the matter told the newspaper.

Strikingly, the NYT also reports that the military is investigating whether “any artificial intelligence models, data crunching programs or other technical intelligence gathering means were to blame for the mistaken targeting of the school.”

Claude, the newspaper’s sources noted, works with the National Geospatial-Intelligence Agency’s Maven Smart System to “identify points of interest for military intelligence officers.”

Regardless of the outcome of the investigation, officials told the paper, bombing the school was ultimately a matter of "human error," no matter how the target was selected in the first place.

According to the NYT's own analysis of historical satellite imagery dating back to 2013, Central Command officials were likely working with outdated information. The imagery shows that the school was fenced off from the nearby military base between 2013 and 2016.

Despite the Trump administration officially labeling Anthropic’s AI chatbot as a “supply chain risk,” a move that sent shockwaves through the AI industry, the military has continued to rely on the chatbot during the offensive.

More on the strike: Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target



Futurism

