The Pentagon is putting real pressure on major artificial intelligence companies to give the U.S. military access to their tools inside classified systems.
Officials aren't just asking for basic access. They want these AI models to work without all the usual limits companies place on users.
During a White House meeting on Tuesday, Emil Michael, the Pentagon's Chief Technology Officer, told tech leaders the military wants these AI models running across both classified and unclassified networks.
An official close to the talks reportedly said the government is now set on getting what it calls "frontier AI capabilities" into every level of military use.
Pentagon demands access without restrictions across secure networks
This push is part of bigger talks about how AI will be used in future combat. Wars are already being shaped by drone swarms, robots, and nonstop cyberattacks. The Pentagon doesn't want to play catch-up while the tech world draws lines around what's allowed.
Right now, most companies working with the military are offering watered-down versions of their models. These only run on open, unclassified systems used for admin work. Anthropic is the one exception.
Claude, its chatbot, can be used in some classified settings, but only through third-party platforms. Even then, government users still have to follow Anthropic's rules.
What the Pentagon wants is direct access inside highly sensitive classified networks. These systems are used for things like planning missions or locking in targets. It's not clear when or how chatbots like Claude or ChatGPT would be installed on those networks, but that's the goal.
Officials believe AI can help process huge amounts of data and feed that to decision-makers fast. But if those tools generate false info, and they do, people could die. Researchers have warned about exactly that.
OpenAI made a deal with the Pentagon this week. ChatGPT will now be used on an unclassified network called genai.mil. That network already reaches over 3 million employees across the Defense Department.
As part of the deal, OpenAI removed a lot of its normal usage limits. There are still some guardrails in place, but the Pentagon got most of what it wanted.
A company spokesperson said any expansion to classified use would need a new deal. Google and Elon Musk's xAI have done similar deals in the past.
AI researchers are quitting and calling out the risks
Talks with Anthropic haven't been as easy. Leaders at the company told the Pentagon they don't want their tech used for automatic targeting or spying on people inside the U.S.
Even though Claude is already being used in some national security missions, the company's executives are pushing back. In a statement, a spokesperson said:
"Anthropic is committed to protecting America's lead in AI and helping the U.S. government counter foreign threats by giving our warfighters access to the most advanced AI capabilities."
They said Claude is already in use, and the company is still working closely with what's now called the Department of War. President Donald Trump recently ordered the Defense Department to adopt that name, but Congress still needs to approve it.
While all of this is happening, a bunch of researchers at these companies are walking out. One of Anthropic's top safeguards researchers said, "The world is in peril," as he quit. A researcher at OpenAI also left, saying the tech has "a potential for manipulating users in ways we don't have the tools to understand, let alone prevent."
Some of the people leaving aren't doing it quietly. They're warning that things are moving too fast and the risks are being ignored. Zoë Hitzig, who worked at OpenAI for two years, quit this week.
In an essay, she said she had "deep reservations" about how the company is planning to bring in ads. She also said ChatGPT stores people's private data, things like "medical fears, their relationship problems, their beliefs about God and the afterlife."
She said thatβs a huge problem because people trust the chatbot and donβt think it has any hidden motives.
Around the same time, tech site Platformer reported that OpenAI got rid of its mission alignment team. That group was set up in 2024 to make sure the company's goal of building AI that helps all of humanity actually meant something.