Robots Atlas

Google Signs Pentagon AI Deal, Filling the Gap Anthropic Refused to Fill

Pan Robocik · April 29, 2026 · 6 min read
Google has granted the U.S. Department of Defense access to its AI models on classified networks, according to reports published April 28, 2026. The move came one day after Anthropic lost its legal standoff with the Pentagon and was formally designated a national security supply-chain risk — the first American company to receive that label. With Google's agreement, the DoD has now secured frontier AI access from three major providers: OpenAI, xAI, and Google.

Key takeaways

  • Google agreed to allow the Pentagon to use its AI for "all lawful purposes" on classified networks, per reporting by The Information, The Wall Street Journal, and TechCrunch.
  • The contract includes language stating Google does not intend for its AI to be used in domestic mass surveillance or autonomous weapons; legal enforceability of those provisions remains unclear.
  • More than 950 current and former Google employees signed an open letter urging CEO Sundar Pichai to reject the deal; Google did not respond to media requests for comment.
  • Anthropic became the first U.S. company ever designated a supply-chain risk by the Pentagon after refusing to grant unrestricted AI access; it is now challenging that designation in court.
  • OpenAI and xAI signed comparable agreements in the weeks prior to Google's deal.

The dispute that set the stage

The sequence began in late February 2026, when the Pentagon demanded unrestricted access to Anthropic's Claude — including for autonomous weapons systems without human oversight and for domestic mass surveillance. Anthropic CEO Dario Amodei refused, arguing that frontier AI models are not reliable enough for such high-stakes unsupervised decisions. The Department of Defense responded by formally branding Anthropic a "supply-chain risk," a designation normally applied to foreign adversaries, requiring any Pentagon partner to certify it does not use Anthropic's technology.

Anthropic contested the designation in federal court. A judge granted the company a preliminary injunction blocking the label while proceedings continue, as TechCrunch reported in late March 2026.

The same day Anthropic's standoff became public, OpenAI announced its own deal with the DoD, covering "all lawful purposes." CEO Sam Altman later acknowledged the agreement was "definitely rushed" and that "the optics don't look good." Shortly after, xAI concluded a similar arrangement.

Google joins as the third major provider

The Information broke the story, and The Wall Street Journal and TechCrunch subsequently confirmed it: Google agreed to allow Pentagon access to its AI on classified networks with broad permissions. Like the OpenAI agreement, Google's contract reportedly contains language stating it does not intend for its AI to be used in domestic mass surveillance or autonomous weapons systems. The Wall Street Journal noted it is unclear whether such provisions are legally binding or enforceable.

Google moved forward despite explicit opposition within its own workforce. Over 950 current and former employees — verified through the open-letter platform notdivided.org — signed a letter calling on Sundar Pichai to reject any classified AI arrangement with the Defense Department without meaningful guardrails. According to Financial Express, signatories reportedly included more than 20 directors, senior managers, and vice presidents from across the company, including Google DeepMind and Cloud divisions.

"Today, we call on you, Sundar, to act according to the values on which this company was built, and refuse classified workloads," the letter concluded, as quoted by Financial Express. Google did not respond to requests for comment from multiple outlets.

From Project Maven to classified networks

The decision represents a significant departure from the position Google staked out in 2018, when thousands of employees protested the company's involvement in Project Maven — a Pentagon program using AI for drone footage analysis. That campaign led Google to withdraw from the contract and publish its own AI ethics principles for military applications.

In the years since, Google rebuilt its defense business steadily. The company was among the winners of a share of a multi-billion-dollar DoD contract in July 2025, alongside Anthropic, OpenAI, and Microsoft. Since December 2025, the Pentagon's unclassified generative AI platform, GenAI.mil, has been running on Google's Gemini models. The new arrangement extends that relationship into classified environments.

The enforceability question

A recurring issue across all three Pentagon AI agreements — OpenAI, xAI, and now Google — is whether ethical carve-outs in contracts provide meaningful protection. OpenAI has argued that restricting deployment to cloud API access prevents direct integration of its models into weapons systems or sensors. That claim has not been independently verified.

Legal observers and critics have noted that contract language saying a company "does not intend" its AI to be used in certain ways falls short of a prohibition. As Techdirt founder Mike Masnick argued regarding the OpenAI agreement, provisions that reference compliance with Executive Order 12333 — the Reagan-era directive governing signals intelligence — may actually permit the types of surveillance the contracts ostensibly limit.

Why this matters

Google's agreement with the Pentagon closes a loop that opened when Anthropic refused. The underlying dynamic is now visible: a government agency used regulatory classification authority — the supply-chain risk designation — to coerce compliance from a private AI company, and the broader market responded by filling the gap. The episode establishes a precedent under which frontier AI access can be pressured through administrative labeling rather than legislation or formal procurement law.

The internal dissent at Google is also analytically significant. Unlike the 2018 Project Maven protest, which succeeded in changing company policy, the 2026 letter produced no visible effect on the decision. The gap between employee values and executive decisions in AI companies dealing with government clients appears to be widening, with practical enforcement of ethical commitments shifting from policy documents toward contract language — whose binding power remains untested.

What's next

  • Anthropic's lawsuit against the Pentagon supply-chain risk designation remains ongoing; the preliminary injunction is in place while the case proceeds. A ruling could define the legal limits of the DoD's authority over private AI vendors.
  • Google reports first-quarter 2026 earnings shortly after this news broke; the Pentagon deal is expected to draw analyst and investor scrutiny.
  • The enforceability of ethical clauses in classified AI contracts has not been tested in court; it may become the subject of congressional oversight or future litigation.
