Professor Camilla Hrdy, who joins the Rutgers Law faculty as an Associate Professor of Law this fall, has a new article on trade secrecy, contracts, and generative AI, forthcoming in the Berkeley Technology Law Journal. Professor Hrdy argues that developers of “closed-source” generative AI products, such as ChatGPT, will seek to block attempts to learn how their models work through a combination of trade secret law and contract law. This patchwork of legal protection has implications both for private actors and for governments as they attempt to gain transparency into AI.

Professor Hrdy shows that companies like OpenAI require users to agree to robust “terms of use” that limit users’ ability to reverse engineer the models and that even contain noncompete provisions. Precedents established for traditional software suggest that violating one of these terms could be treated not merely as a breach of contract but as trade secret misappropriation, exposing violators to harsh remedies and extending liability even to third parties outside the contract.

Professor Hrdy argues that one solution to this problem lies in the Defend Trade Secrets Act (DTSA) of 2016. The DTSA contains provisions making clear that reverse engineering is lawful under federal law, and a state law that turns lawful reverse engineering into trade secret misappropriation is preempted by federal trade secret law under the Supremacy Clause of the Constitution. The upshot, Hrdy argues, is that reverse engineering a publicly distributed generative AI model cannot be construed as trade secret misappropriation, regardless of any boilerplate anti-reverse-engineering or noncompete clause.