
The Pentagon has ordered that all Anthropic AI tools be removed from military systems after the company was designated a supply chain risk.
A March 6 memo instructed officials to remove all Anthropic AI products within 180 days, according to CBS News. The revelation comes amid rising tension between Anthropic and the military, and follows an announced partnership between OpenAI and the government.
Anthropic was deemed a supply chain risk in a March 4 letter, a designation that could affect the company's relationships with military contractors. "It [the designation] plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," Anthropic wrote in a March 5 statement.
Anthropic has sued the military over the supply chain designation, arguing that it violates the company's First Amendment rights and amounts to retaliation, Business Insider reported. The outlet also reported that the ongoing dispute could cost Anthropic billions.
Anthropic has previously said its dispute with the military centered on guardrails intended to prevent the use of its technology in domestic surveillance and autonomous weapons.
"The Department of War has stated they will only contract with AI companies who accede to 'any lawful use' and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards," Anthropic wrote in a statement February 26.
According to the memo cited by CBS, the military is now following through with the threat and removing Anthropic from all systems. In the memo, Defense Department Chief Information Officer Kirsten Davies wrote that adversaries "can exploit vulnerabilities" and could pose "potential catastrophic risks to the warfighter."
OpenAI announced an agreement with the Department of War on February 28 and updated that announcement on March 2. In those statements, OpenAI said that it too would have guardrails regarding mass domestic surveillance and the development of autonomous weapons.
The agreement does, however, include the "lawful purpose" standard that Anthropic described as a sticking point in its own relationship with the department. OpenAI wrote that its agreement stated: "The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols."
Originally published on IBTimes
© Copyright IBTimes 2024. All rights reserved.