The same companies that once put safety and ethics at the center of their mission statements are now actively developing and selling new technology for military applications.
In 2024, OpenAI removed its ban on “military and warfare” use cases from its terms of service. Since then, the company has signed a deal with autonomous weapons maker Anduril and, this past June, secured a $200 million Department of Defense contract.
OpenAI is not alone. Anthropic, which has a reputation as one of the most safety-oriented AI labs, has partnered with Palantir to allow its models to be used for US defense and intelligence purposes, and it also landed its own $200 million DoD contract.
Read more | THE VERGE