Lorena O’Neil
March 18, 2026
A soldier of a U.S. Army Unmanned Aircraft System platoon watches as an Anduril Ghost-X helicopter surveillance drone lands. Sean Gallup/Getty Images

For its 2026 budget, the U.S. Department of Defense requested $13.4 billion for autonomous weapons and systems, a category that includes unmanned and remotely operated drones and weapons. Defense Secretary Pete Hegseth has made it clear that one of his top priorities is to accelerate the use of artificial intelligence on the battlefield. But experts are voicing concerns that he has done this while closing civilian harm mitigation centers, slashing oversight, cutting back on operational testing, and dismissing these kinds of guardrails as “risk-aversive bureaucracy.” A new “Business of Military AI” report from New York University Law School’s Brennan Center for Justice found that the military’s expanded use of AI, combined with loopholes in the rules governing how the military uses this new technology, has made regulation and oversight difficult at a dangerously transformative moment for the future of warfare.
“I think that the law is not evolving as fast as the technology is, but that, in many ways, is a political choice,” says Amos Toh, senior counsel at the Brennan Center and one of the authors of the report. “There is this kind of broader demonization of guardrails as bureaucracy, right? But the problem with that framing is that guardrails actually also help make the technology safe and effective.”
Toh says that the rhetoric that guardrails are unnecessary and “woke” ignores the fact that they’re put in place not just to reduce excessive civilian harm and protect civil liberties, but also to make sure the technology is tested and deemed reliable. It is safer for people in the military if their AI weapons and tracking systems work as expected, especially on the battlefield: “There’s operational value in making sure this technology is appropriately tested, evaluated, and overseen.”
Steven Feldstein, a political scientist and senior fellow at the Carnegie Endowment for International Peace, agrees. “You have a Pentagon, under Hegseth’s leadership, that is increasingly dismantling accountability structures,” he says. “[It’s] at the very moment when you’re introducing a powerful new technology that has a lot of opacity when it comes to how information is processed and integrated into different decisions. To not have basic structures in place to scrutinize that, to test it, to validate their accuracy, should be raising alarm bells.”
DRONES HAVE BEEN USED IN THE MILITARY for decades, and have evolved since the first modern drone was launched in 1994. Ukraine’s reliance on drones in its war against Russia signaled a turning point, Feldstein explains: low-cost drones allow countries with fewer resources to close the gap with better-equipped adversaries, making drones the future of warfare. Last summer, Hegseth pledged to increase the production of low-cost drones in the U.S., while Donald Trump Jr. and Eric Trump recently announced they are backing a new drone company to meet the Pentagon’s demand.
And the rapid evolution of AI is transforming how drone systems are used for navigation, communication, surveillance, and target identification.
The Brennan Center report found that the Department of Defense has been heavily investing in drone projects that could allow unmanned systems to communicate and strike targets, even if the machines lose touch with the humans controlling them. It found that two tech companies — Palantir and Anduril — have grown their shares of defense revenue faster than most others. “Records show how Palantir’s brand of data [integration] and analytics has become indispensable to the military’s pivot to data-centric warfare,” reads the report. “All branches of the armed forces have bought software from Palantir.”
In 2017, the military launched Project Maven, which began as a pilot program with Palantir, Google, and other tech companies to develop algorithms that sorted through satellite imagery and drone footage to identify targets. Maven now analyzes a wide variety of data sources with dedicated funding from the defense budget. Palantir is also prototyping Titan, an AI-based command-and-control program that would analyze data to help the Army plan missions.
Last year was the most profitable for both Palantir and Anduril since they began doing business with the military, the report found, although it added that the five biggest defense contractors — companies like Lockheed Martin and Boeing — continue to dominate defense spending. Anduril partnered in 2018 with the Department of Homeland Security to build a virtual border wall at the U.S.-Mexico border, a network of surveillance towers loaded with AI features. Now the company is also selling AI-powered drones and munitions to the military.
But the marriage between the military and private tech companies has not been seamless. Last month, contract negotiations broke down when the U.S. government tried to pressure tech company Anthropic to change its original deal with the DoD and allow its Claude AI model to be used to develop autonomous weapons or conduct mass surveillance on Americans. Anthropic said it didn’t feel that its large language model was ready to do that, and when the company wouldn’t budge on renegotiating the terms, Hegseth declared it a supply chain risk to national security, prompting a lawsuit from Anthropic and multiple amicus briefs from fellow tech companies. (OpenAI announced a deal with the Pentagon in the wake of its turmoil with Anthropic.)
The truth is, the Anthropic drama was a rare glimpse into the relationship between the DoD and Silicon Valley, one that, proponents of AI safety point out, the American public otherwise gets little insight into. Because AI is new, the U.S. government funds defense technology through a patchwork of laws, loopholes, and pilot programs that allows for little regulation, transparency, or testing. And the few safeguards that have been put in place are being torn down.
In 2025, shortly after taking office, President Donald Trump rescinded Joe Biden’s 2023 executive order promoting safe, secure, and trustworthy AI. That order had led to an October 2024 memorandum outlining a framework for AI governance and risk management in national security. (Toh’s report criticized Biden’s rules as “inadequate to begin with” but pointed out that Trump has further weakened multiple safeguards.) Hegseth also gutted oversight offices tasked with limiting the risk of civilian harm, like the Civilian Protection Center of Excellence. Additionally, he halved the staff of the office that oversees testing of the military’s major weapons systems. Testing allows fixes to be made in a controlled, laboratory setting rather than on the battlefield, Toh explains — like when AI-powered drones used in the Ukraine war failed to work as promised and proved susceptible to Russian GPS-jamming techniques.
“There are real reasons why you’d want to update how the military undertakes its missions; incorporating these technologies is important and something that other countries’ militaries are [already] doing,” says Feldstein. The answer isn’t to avoid innovating altogether, he says, but “to do so in a thoughtful, responsible way. You want to have the right types of accountability and transparency so that you’re hitting the legitimate targets as you should, and you are minimizing civilian collateral damage as is needed and as international law accounts for.”