
Palantir demos show how the military could use AI chatbots to generate war plans



An ongoing and heated dispute between the Pentagon and Anthropic is raising new questions about how the startup’s technology is actually used inside the US military. In late February, Anthropic refused to grant the government unconditional access to its Claude AI models, insisting that the systems should not be used for mass surveillance of Americans or for fully autonomous weapons.


The Pentagon responded by labeling Anthropic's products a “supply-chain risk,” prompting the startup to file two lawsuits this week alleging illegal retaliation by the Trump administration and seeking to overturn the designation.


The clash, along with the rapidly escalating war in Iran, has drawn attention to Anthropic’s partnership with the military contractor Palantir, which announced in November 2024 that it would integrate Claude into the software it sells to US intelligence and defense agencies. 


Palantir says the Claude integration can help analysts uncover “data-driven insights,” identify patterns, and support “informed decisions in time-sensitive situations.”


Read the full story  |  WIRED





© 2026 UnmissableAI
