OpenAI has accused Chinese AI startup DeepSeek of trying to replicate ChatGPT and other leading U.S. artificial intelligence models through a technique known as “distillation,” according to a memo the company sent to U.S. lawmakers. The memo, seen by Reuters and described in published reports dated Feb. 12, says OpenAI believes DeepSeek has been using outputs from U.S. models to train its own systems and gain a competitive advantage.
The allegations were laid out in a memo sent to the U.S. House Select Committee on Strategic Competition between the United States and the Chinese Communist Party. OpenAI said it has observed “ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs,” referring to activities it attributed to DeepSeek.
What OpenAI alleged
In the memo, OpenAI described “distillation” as a method in which an older, more capable AI model is used to grade the quality of answers produced by a newer model, a process that can transfer learning from the stronger system to the weaker one. OpenAI said it believes this approach is being used to replicate the behavior of U.S. models and feed that behavior back into training.
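As a rough, purely illustrative sketch of the general technique, and not code attributed to OpenAI, DeepSeek, or anything described in the memo, output-based distillation collects a stronger “teacher” model’s answers and uses them as supervised training targets for a smaller “student” model. Every function, prompt, and model name below is hypothetical.

from typing import List, Tuple


def query_teacher(prompt: str) -> str:
    """Stand-in for an API call to a stronger, hosted teacher model."""
    return "teacher answer for: " + prompt


def build_distillation_set(prompts: List[str]) -> List[Tuple[str, str]]:
    """Collect (prompt, teacher_output) pairs to serve as training targets for the student."""
    return [(p, query_teacher(p)) for p in prompts]


def fine_tune_student(pairs: List[Tuple[str, str]]) -> None:
    """Placeholder for a supervised fine-tuning pass over the collected pairs."""
    for prompt, target in pairs:
        # In a real pipeline: tokenize, compute a next-token loss against the target, update weights.
        print("train student on:", prompt, "->", target)


if __name__ == "__main__":
    data = build_distillation_set(["Explain chain-of-thought prompting.", "Summarize this memo."])
    fine_tune_student(data)

The point of the sketch is the data flow: the stronger model’s outputs, however they are obtained, become the training signal for the weaker one.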
OpenAI also alleged that it has “observed accounts associated with DeepSeek employees developing methods to circumvent OpenAI’s access restrictions.” The memo said those methods included accessing models “through obfuscated third-party routers and other ways that mask their source.”
In addition, OpenAI wrote that it knows “DeepSeek employees developed code to access U.S. AI models and obtain outputs for distillation in programmatic ways.” According to the published accounts, the memo’s claims center on access and usage patterns that OpenAI says it observed, rather than on detailed technical evidence.
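To make the phrase “in programmatic ways” concrete, the hedged sketch below shows what automated output collection generically looks like: a loop that sends prompts to a model endpoint and writes the responses to a file for later training. It is illustrative only; the endpoint stand-in, file name, and pacing are assumptions, not details from the memo.

import json
import time
from typing import Iterable


def call_model_api(prompt: str) -> str:
    """Stand-in for an HTTP request to a hosted model endpoint."""
    return "model output for: " + prompt


def collect_outputs(prompts: Iterable[str], path: str, delay_s: float = 0.5) -> None:
    """Query each prompt and append prompt/output records to a JSONL file."""
    with open(path, "a", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "output": call_model_api(prompt)}
            f.write(json.dumps(record) + "\n")
            time.sleep(delay_s)  # crude pacing between requests


if __name__ == "__main__":
    collect_outputs(["What is distillation?", "Give one example."], "collected_outputs.jsonl")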
DeepSeek response and broader concerns
DeepSeek and its parent company, High-Flyer, did not immediately respond to Reuters’ requests for comment, according to the reports. The accounts identify DeepSeek as Hangzhou-based.
The reports also say DeepSeek drew wide attention early last year after releasing a set of AI models that rivaled some top U.S. offerings, helping fuel concerns in Washington that China is catching up in the AI race despite U.S. export restrictions. Those concerns were tied to the perceived competitiveness of DeepSeek’s models and to the broader U.S.-China technology rivalry reflected in the House select committee’s mandate.
OpenAI also raised safety-related criticism, saying Chinese large language models are “actively cutting corners when it comes to safely training and deploying new models.” That statement was presented as part of OpenAI’s broader argument to lawmakers about risks it associates with the competitive AI landscape.
Models named in the reports
The reports note that Silicon Valley executives have previously praised DeepSeek models called DeepSeek-V3 and DeepSeek-R1. Those models are described as being available globally.
The memo’s allegations arrive at a time when new model releases and rapid iteration have intensified scrutiny of how AI systems are trained and what data and methods are used to reach competitive performance. In that context, “distillation” has become a particularly sensitive issue because it can rely on outputs from more advanced systems in ways that companies may view as misuse.
What OpenAI says it is doing
OpenAI said it proactively removes users who appear to be attempting to distill its models in order to develop rival systems. The company’s memo to the House select committee frames the DeepSeek allegations as part of a wider concern about protecting the work of “U.S. frontier labs” and limiting what it sees as improper replication.
For now, the reports do not describe any public response from DeepSeek or High-Flyer addressing the specific claims in the memo. The issue is likely to keep drawing attention because it touches on competition, access controls, and the methods companies use to build high-performing AI systems.
