As the Anthropic-Pentagon crisis unfolded on February 27 and March 2, 2026, something remarkable happened within two of Anthropic's biggest rivals.
More than 300 Google employees and over 60 OpenAI employees signed an open letter calling on their employers to stand with Anthropic and refuse to allow the US military unrestricted access to AI technology.
It was the largest joint employee action between two competing companies in the history of the AI industry.

AI Ethics War: What Did the Employee Open Letter Actually Say?
The letter was direct and clear. It argued that the US government was trying to divide AI companies by playing on each one's fear that a rival would give in to the military's demands and capture the resulting government contracts.
It urged leaders at Google and OpenAI to set aside their competitive differences and stand together.
Specifically, it called on them to uphold the same boundaries Anthropic had tried to defend: no use of AI for autonomous weapons and no use of AI for mass domestic surveillance of civilians.
Jeff Dean, the Chief Scientist of Google DeepMind and one of the most respected AI researchers in the world, posted publicly on X in support of the letter's sentiment.
He wrote that mass surveillance violates the Fourth Amendment of the US Constitution and has a chilling effect on freedom of expression.
Although Dean said he was speaking as an individual and not on behalf of Google, his public statement was seen as a significant signal of where some of the most senior AI researchers stand on this issue.
What Does This Tell Us About the Future of AI Companies?
What makes this event so significant is what it reveals about who really makes decisions within AI companies. These are not junior employees writing a petition.
Many of the letter's signatories are senior engineers and researchers who built the very models their companies are being asked to license to the government.
When the people who create these tools say publicly that certain uses are unacceptable, it creates a real problem for leadership teams that want to pursue government contracts without restrictions.
The broader implication for the AI industry is that the era of purely business-driven AI development is ending.
AI researchers increasingly see themselves not just as technology builders but as people responsible for the social consequences of what they create.
In 2026, the best AI talent in the world is increasingly choosing employers based partly on whether those employers share their values about how AI should and should not be used.
Companies that want to attract top AI researchers will need to have clear, credible positions on military use, surveillance, and autonomous weapons — not just profit targets.