Testifying in a California federal court, Elon Musk confirmed that his AI company xAI used distillation techniques on OpenAI's publicly accessible models to train Grok — while simultaneously suing OpenAI for betraying its founding principles.
Key takeaways:
- Musk testified that xAI applied distillation to OpenAI models during Grok's training; pressed for a direct answer, he said "Partly"
- He framed it as "general practice" across the AI industry — a normalization attempt that still constitutes an admission
- In the same session, Musk ranked Anthropic first among AI companies, followed by OpenAI, Google, and Chinese open source; he described xAI as "a much smaller company with just a few hundred employees"
- Musk is the plaintiff in this case — he is suing OpenAI for allegedly abandoning its nonprofit mission
What Distillation Actually Means
Distillation, in the context of large language models, is the process of systematically querying a publicly accessible model to generate training data for a new "student" model. The practice became a major flashpoint in January 2025, when DeepSeek released its R1 model amid accusations that it had been trained in part by distilling leading American frontier models, including OpenAI's. The episode triggered alarm among U.S. AI labs: if a competitor can extract the capability of a cutting-edge model through inexpensive systematic querying, the scale advantages of billion-dollar compute investments become partially nullified.
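The query-harvest loop at the heart of distillation can be sketched in a few lines. This is a toy illustration only: the teacher function below is a local stub standing in for an inference call to a hosted model, and every name is a hypothetical placeholder rather than any vendor's actual API.

```python
# Toy sketch of LLM distillation: systematically query a "teacher" model
# and collect (prompt, completion) pairs as supervised training data for
# a smaller "student" model. All names here are illustrative placeholders.

def query_teacher(prompt: str) -> str:
    """Stand-in for an inference call to a publicly accessible teacher model."""
    return f"teacher answer to: {prompt}"

def build_distillation_dataset(prompts: list[str]) -> list[dict]:
    """Harvest teacher outputs; each record later becomes one
    fine-tuning example for the student model."""
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

prompts = ["Explain gradient descent.", "What is model distillation?"]
dataset = build_distillation_dataset(prompts)
```

The student is then fine-tuned on these pairs, which is why cheap, large-scale querying can transfer capability that cost billions in compute to create in the first place.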
The Testimony
On April 30, 2026, under questioning by OpenAI's attorneys, Musk was asked directly whether xAI had used distillation on OpenAI's models to train Grok. His first response was to characterize it as "a general practice among AI companies." Pressed on whether this meant xAI had done so, he answered: "Partly."
Musk's lawsuit against OpenAI rests partly on the argument that the organization broke faith with its founders and early backers by abandoning its founding principles. His admission that his own company extracted value from OpenAI's models without explicit authorization inverts the moral architecture of his case.
The Ranking
Musk was confronted with a prior claim that xAI would soon be "far beyond any company except Google." He offered a spontaneous ranking: Anthropic first, followed by OpenAI, Google, and Chinese open source models. He placed xAI well below all of them, describing it as "a much smaller company with just a few hundred employees." That modesty sits awkwardly beside xAI's $6 billion funding round, closed in late 2024 at a reported valuation of roughly $50 billion.
The Legal and Industry Context
OpenAI recently reached a settlement with Microsoft resolving a separate dispute over the terms of Microsoft's original investment. The Frontier Model Forum — a coalition that includes OpenAI, Anthropic, and Google — has launched an initiative to detect and block distillation attempts from foreign actors, particularly in China. The irony of this initiative is now documented in court transcripts.
Distillation does not involve copying model weights or code — it involves training on the outputs of inference calls. Whether this constitutes a copyright violation, a terms-of-service breach, or an act of unfair competition depends on jurisdiction and interpretation. No court has issued a definitive ruling on the question.
Why This Matters
What makes Musk's testimony significant is not the practice itself — it was widely assumed to be occurring across the industry — but the conversion of an industry assumption into sworn testimony. For Musk personally, his narrative as the defender of principled AI development is now harder to sustain in court. For the industry: if courts or regulators move to formalize the rules around distillation, they will be doing so in an environment where the leading AI companies — OpenAI, Anthropic, Google — are simultaneously its practitioners and its critics.
The deeper issue is what the testimony reveals about competitive dynamics. When even well-funded companies like xAI need to bootstrap capability from competitors' models, the moat created by training data and compute is less durable than balance sheets suggest.
What's Next?
- The Musk v. OpenAI trial continues; further testimony may reveal the scale and methodology of xAI's distillation practices
- The Frontier Model Forum's distillation detection initiative will likely face new scrutiny given Musk's admission
- Legislative interest in regulating model distillation may intensify in the U.S. and EU
Sources
TechCrunch — Elon Musk testifies that xAI trained Grok on OpenAI models