OpenAI scrambles to limit damage from US military deal

Panic stations as ChatGPT uninstalls surge 295%

Sam Altman is trying to limit the backlash against OpenAI, which has seen a surge in ChatGPT app uninstalls after announcing a partnership with the US administration. Concerns over ethics, surveillance, and AI weaponry remain unassuaged.

OpenAI has swung into damage limitation mode, with CEO Sam Altman saying yesterday, in an internal memo subsequently shared on X, that the deal “looked opportunistic and sloppy.”

"We shouldn't have rushed to get this out on Friday," Altman wrote in an X post on Monday. "The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."

OpenAI announced that it was happy to step into the gap created by Anthropic’s split with the rebranded US Department of War (DOW), headed by former Fox News talking head Pete Hegseth.

The split was triggered by the US military’s use of Anthropic’s Claude during the extraction of Venezuelan president Nicolás Maduro in January. Anthropic referred the DOW to its terms of use, which prohibit the use of Claude for violence, weapons development, or surveillance.

Hegseth demanded full and unrestricted access to all Anthropic AI models for ‘every lawful purpose’ and set a deadline of the end of last week for Anthropic to comply. Anthropic refused.

In the ensuing online temper tantrum, Hegseth accused Anthropic of “arrogance and betrayal.” He said that Anthropic would continue to provide services “for a period of no more than six months to allow for a seamless transition to a better and more patriotic service”.

Just hours before the US and Israel began attacking Iran, Trump ordered all federal agencies to stop using Claude and said it was a “Radical Left AI company run by people who have no idea what the real World is all about”.

The Wall Street Journal reported that Claude was used in the attacks anyway, which illustrates both how deeply it is embedded in operations and that somebody within the US administration was keen for that information to be reported.

ChatGPT uninstalls leap 295%

The announcement that OpenAI was partnering with the DOW was greeted unenthusiastically by ChatGPT’s consumer users. US uninstalls of ChatGPT’s mobile app jumped 295% on Saturday, February 28. For comparison, the usual rate is around 9%.

At the same time, downloads of Claude increased by 37% on Friday and 51% on Saturday, and at the time of writing the app remains top of the free download rankings in Apple’s App Store.

Altman’s original announcement claimed the deal had more safeguards than the Anthropic agreement, but the contract extracts shared by Altman revealed that it still allowed for mass surveillance and AI-controlled weaponry provided it was “legal” – a term that is open to interpretation by a US administration keen to avoid being hobbled by what Hegseth referred to yesterday as “stupid rules of engagement.”

Altman’s attempts to limit the fallout from what he himself called his “opportunistic”-looking deal haven’t really worked.

He claimed in his new post that the company had added new language to the contract to address the use of ChatGPT for domestic surveillance, but the amendments OpenAI has shared continue to rely on the malleable concept of legality as the only restraint on mass surveillance. The publicly shared extracts also fail to address the issue of autonomous weapons.

"Consistent with applicable laws... the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals," the new sections read. "For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."

The words “intentionally” and “deliberate” look to many like legalese for “loophole,” with some social media users pointing out that in an autonomous system, intentionality isn’t a safeguard.

In yesterday’s post Altman again absolved himself of any responsibility for the ethical dilemmas that the technology he developed has created, saying:

"It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. “

Which of course poses an obvious question: liberty for whom?