On February 27th, I listened to Ezra Klein’s podcast with Jack Clark, Anthropic’s co-founder. While much of it was the usual C-suite talk about his company and the future innovation their AI will bring, one part showed something truly human from Jack. When asked about Anthropic’s contract negotiations with the Department of Defense, he said he could not discuss them at all. For a moment he shut down, his tone shifted, and the conversation had to move on to something else. Afterward, they discussed how AI security needs to be heavily stressed in the months and years ahead.
After listening to the podcast, I caught up on the news, the president’s Truth Social posts, and the responses from every party. It felt like binging a Netflix series the week the finale comes out. It also felt a bit confusing. Anthropic has two demands, and if you haven’t seen them yet, here they are:
1) The AI cannot be used for autonomous attacks without human supervision and approval
2) The AI cannot be used for mass domestic surveillance
Speaking as an American citizen, this honestly sounds refreshing. A company, let alone an AI company, is looking out for us and standing firm on making sure our rights as citizens are upheld. But the longer I think about it, the more confused I get. Shouldn’t it be the government making sure this technology is regulated, ready, and useful for national security? An American company that has already cut ties with CCP-related threats, helped stop cyberattacks, and is already embedded within the Department of Defense should not be branded a security risk.
I can always see the argument that national security is important. But why do these clauses cause the government to mark Anthropic as a supply chain risk? That designation would bar Anthropic not only from doing business with the government but also from subcontracting on any work other companies do for the government. Anthropic can afford to lose the revenue from government contracts, but being cut off from doing business with other American companies would cripple them. It feels as though they are being forced to either fall in line or lose to their competitors. Coming from what should be a conservative government, one that should want less involvement in regulating what private businesses do, this seems to contradict that philosophy. If the response were simply to cut Anthropic out of military operations and future contracts over these clauses, I would find that a reasonable step. But making them the first American company in history labeled a supply chain risk seems like an overreaction.
The best argument I have heard against what Anthropic is doing draws on another dangerous technology: nuclear weapons. If a private company developed a nuclear bomb, wouldn’t you want the government to step in and make sure it was used properly? I can confidently answer yes. I would want a government I fully trusted to step in and take control. Currently, I lack that level of confidence. This administration has a ‘do first, sort it out in court later’ philosophy. On top of that, I would compare the technological rise of AI more closely to a nuclear power plant than to a bomb. If a power plant told the government, “You can take all of our research and use our tools to build a bomb, on the condition that no bomb is ever dropped on an American city,” that would be reasonable. Now imagine the government refused to make that promise and threatened to shut the plant down while actively taking its research.
This example is extreme, and not a one-to-one comparison by any means. But AI will be the next frontier in military technology, spanning drones, intelligence, and cyberattacks. The United States should make it a priority to lead in this field, but its current approach is deeply off-putting. Understandably, you do not want a private company’s CEO giving the go-ahead when a bomb could reach home soil in minutes. That scenario would not have been an issue under the stated clauses, yet the involvement of private companies in military action is the second most common argument I have heard for the US government’s case to exile Anthropic. At every step I see a reasonable boundary being drawn, and somehow the company is labeled “woke” and a national security threat.
The only party more unlikable in this case is OpenAI. They seemingly spoke up for Anthropic’s values at first, as the message was reasonable, but have now signed a contract with the Department of Defense. Two alarm bells rang when I saw this. First, the gap between their financial obligations and their revenue is unsustainable from a business perspective. I am not here to pitch an AI bubble; there will be winners and losers, and I feel ChatGPT has lost the major advantage it had going for it. OpenAI had ease of use and publicity. My older coworkers would refer to all AIs as “Chats.” They held the #1 spot on the App Store, and people had been using them since they were the first to go mainstream. While Claude produces the best code and Gemini has some of the best training and funding a business could ever desire, ChatGPT was to the average American what Tupperware is to plastic storage containers.
Since the contract was signed, I have seen people unsubscribe from ChatGPT in droves, and I am one of them. Their dire situation is getting worse. But what happens when they are so heavily entrenched in the US government’s defense systems that they cannot pay for expansion or training, or even keep the doors open? Will my tax dollars fund another bailout? Will the US government run an even bigger deficit to keep them afloat? From the outside, this seems almost as desperate as ads, which OpenAI said in an interview several months prior would be a last resort to keep the business running. That brings me to my second point: they seem to do only what is in their own best interest. Understandably, a business will always try to survive, but OpenAI appears to be straying ever further from its core principles and taking the escape routes laid in front of it ever faster. After several years, they are no longer only a non-profit; after several months, they are running ads; and not even several days later, they are signing this defense contract.
Seeing Anthropic stand up for the people makes me truly believe their first core value, “01 Act for the global good.” I haven’t used much of their services, but I will start now. I ask anyone reading this to investigate the exact contract wording and vote with their wallet based on what they truly believe is right. As a single person and consumer, that is all I can do besides speaking out and sparking discussion with others. If you have anything to add or change, or insight into something I got wrong, I want to hear it. My core message is always to learn, grow, and get involved when possible.
