Professor Hrdy presented on trade secrecy and generative AI at the Philadelphia Intellectual Property Law Association (PIPLA) meeting held on Tuesday, Sept. 16, 2025, at Volpe Koenig. Professor Hrdy spoke to a group of practicing IP attorneys and PIPLA members about her research on trade secret law and artificial intelligence. Her article, “Keeping ChatGPT a Trade Secret While Selling It Too,” is published in the Berkeley Technology Law Journal and can be accessed here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4879849
Professor Hrdy addressed a legal puzzle that keeps trade secret owners (and their lawyers) up at night: How can companies protect valuable new generative AI technology using trade secret law while also selling new AI products to the public? She answered this question by examining trade secret precedents involving traditional software technology, as well as ongoing cases currently being pursued by AI companies.
Professor Hrdy addressed several interesting questions from audience members, including:
Q: “How should small businesses protect their AI-related trade secrets? Should they get patents or rely on trade secrecy?”
A: The standard factors that go into this decision seem to weigh in favor of trade secrecy for at least some aspects of AI inventions, such as algorithms, code, training data, model architecture, model weights, and system prompts. The primary reason is the relative ease of maintaining secrecy. Additional factors include the potential for a longer term of protection, the undesirability of disclosure through the patent system (publication can occur as soon as 18 months after filing), patent eligibility challenges, and the likely difficulty of enforcing any resulting patent, given the difficulty of detecting infringement and the likely narrow claim scope.
Q: “What is the implication of patent eligibility challenges for AI-related technology?”
A: It could, yes. The Patent & Trademark Office has not categorically excluded AI-inventions, but eligibility challenges raised by Section 101 “abstract idea” rejections as well as enablement and obviousness challenges does this shift the calculus towards trade secrecy versus patenting. Moreover, the Patent & Trademark Office’s negative patentability stance on non-human-inventions (e.g. invention purely developed by an AI with no human involvement) might make trade secrecy attractive. See Thaler v. Vidal (Fed. Cir. 2022) (“…the Patent Act requires an “inventor” to be a natural person…”) Trade secrecy, in contrast, has no human inventor requirement. So for purely AI-generated inventions, where patents (and copyrights) may not be available, trade secrecy provides an alternative.
Q: “Do courts consider whether AI companies who distribute generative AI models to the public have failed to take reasonable measures to protect their secrets?”
A: Yes, they do, and they will likely do so in upcoming AI cases, such as the OpenEvidence cases I have written about. As I’ve noted, information that can easily be extracted from an AI through simple prompting, without much time, cost, or effort, is quite arguably “readily ascertainable through proper means” and not the subject of “reasonable” secrecy precautions. That said, courts give great deference to attempts to keep information factually secret (e.g., compiling code to make it hard to decipher, requiring all users to log in with passwords, and adopting other cybersecurity measures). Courts also give great deference to contractual measures, such as requiring all users to adhere to “terms of use” that restrict what they can do with the underlying technology. As I’ve discussed, these terms of use may create obligations of confidentiality, restrict reverse engineering, prohibit automated methods of extracting data, or even contain noncompete clauses. On the other hand, some courts in the software context have held that releasing software features that are plainly visible to users, without a confidentiality provision, constitutes a failure to take reasonable measures and forfeits trade secrecy.
Q: “If I give an AI tool like ChatGPT my own trade secrets, could OpenAI adopt this information as their own trade secret?”
A: Ideally, no, but this could be very hard to detect. Under 18 U.S.C. § 1839(4), the “owner” of a trade secret is defined as “the person or entity in whom or in which rightful legal or equitable title to, or license in, the trade secret is reposed.” Although trade secret law has no “originality” requirement (of the kind found in copyright, where you cannot claim copyright in material you derived from another), only licensees or those with “rightful” title can be owners of trade secrets. At least in theory, a trade secret taken from another without authorization should not be deemed rightfully owned by the taker. That said, the general terms of use for ChatGPT users do not promise users confidentiality. As I’ve discussed, there are separate terms of use governing an Enterprise License, which do contain mutual confidentiality protections: OpenAI promises to “use Discloser’s Confidential Information to exercise its rights and fulfill its obligations under this Agreement,” to “take reasonable measures to protect the Confidential Information,” and to “not disclose the Confidential Information to any third party except as expressly permitted in this Agreement.” However, there is no longer any such provision in the general terms of use that apply to ordinary users. Users can in theory “opt out” of having a model train on their information. The general terms of use include an “Opt out” clause, stating: “If you do not want us to use your Content to train our models, you can opt out by following the instructions in this article. Please note that in some cases this may limit the ability of our Services to better address your specific use case.”…