Baltimore Believe me, I know some stone-cold ChatGPT lovers, who dove in hard and use this version of artificial intelligence every day. In the last couple of days, two of them announced they had deleted it completely, after having depended on it deeply. I don’t know if there’s a worldwide or national boycott or what, but I also read in the daily papers that OpenAI’s user count has dropped, just as the count for Anthropic has risen. They can’t love the press they are getting either, as they have managed to brand themselves as the bad, dangerous, valueless AI as opposed to Anthropic as the good guys. There’s nothing like a good old-fashioned greed and glory story from Silicon Valley.
None of this particularly surprises me, even though I’d still be careful about really picking a side in this affair, both because it’s not simply a showdown between two AI gunfighters at the OK Corral, and because neither of them nor the other contenders are necessarily wearing white hats, and they are certainly looking after themselves and their businesses rather than us. Reading about Sam Altman over the years, especially after he was fired and rehired at OpenAI, caught in an endless dispute with Elon Musk, and then rejiggering OpenAI away from its nonprofit, protect-against-all-harms beginnings, it was clear that this was a guy on the make, and definitely not someone anybody should ever trust with the car keys, much less artificial intelligence. Then in recent months I read Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, by Karen Hao, who was embedded in the company at different times before all of the mess broke out and she became persona non grata. Altman comes off as a main-chance, unaccountable schemer, an unprincipled business guy mouthing platitudes about AI safety and protections. Not a good look, but it confirmed many suspicions.
All of which hardened to concrete in the recent dispute with the Pentagon. The Defense Department was negotiating with Anthropic about the use of its Claude AI tools within the military. These tools had reportedly been deployed in the extraction of Venezuela’s president. Reportedly, Anthropic wanted a voice in the use of its tools in mass warfare and wanted assurances that its tools would not be used for mass domestic surveillance. Defense Secretary Hegseth bridled at the notion that anyone connected with AI could ever have anything to say that might slow down or impede his vision of American warfighters. Negotiations broke down over the issue, and literally within hours there was an announcement that OpenAI had made a deal with the Pentagon, at the least dancing on a competitor’s grave, and at the most kowtowing to Pentagon demands rather than honoring the AI safety principles behind the line in the sand that Anthropic had drawn. Musk and his AI weren’t far behind.
The public and industry pushback on OpenAI was immediate. In less than 24 hours, Altman claimed their contract with the Pentagon had been amended to include the language that Anthropic had lost the contract over. Without any credibility, Altman claimed in an all-hands meeting, with his own staff rebelling, that the “timing was unfortunate.”
People are voting on the credibility of AI with their abandonment of OpenAI’s ChatGPT while signing up for Anthropic’s Claude, Google’s Gemini, and other, in many ways superior, tools in protest of OpenAI’s subservience to the Trump government and military. None of these outfits is perfect. They are in competition, including over who is in position to go public in the stock market first and finest. The stakes are huge, and without being sure that anyone, including and maybe especially the government, will protect us, people are doing the right thing by being selective about which AI tools to use. If that means a boycott of one now and a run to the others, maybe that’s a message these techsters will finally understand and heed.
