Trade Secrets and Generative AI: Protective Measures in an Evolving Technological Landscape

In Short 

The Background: In recent months, artificial intelligence ("AI") platforms have taken the world by storm, introducing new, powerful tools for generating original and useful content based on training data and user prompts. 

The Situation: These tools pose a potential threat to a company's trade secrets, as an employee may inadvertently disclose sensitive information by using generative AI applications. This has led some companies to ban the use of these applications for work-related tasks.  

Looking Ahead: Although prohibiting the use of generative AI is one approach, there are several ways to reasonably protect one's trade secrets while still taking advantage of generative AI's many benefits.

Generative AI applications such as large language models have emerged as groundbreaking tools for analyzing data and generating work product in all industries. As recent news has shown, however, those tools pose a unique threat to a company's trade secrets. These applications capture and store their inputs to train their models. Once captured, the information input into those applications sometimes cannot be deleted by the user, may be used by the application, and may be reviewed by the company behind the AI application. If an employee inputs a company's trade secret into an AI prompt, that trade secret could be at risk of losing its trade secret protection.  

To avoid the consequences of any disclosure arising from the use of an AI system, companies should ensure that they take reasonable measures to protect their trade secrets. This Commentary analyzes potential measures to protect company trade secrets from employees' use of generative AI applications. 

The Reasonable Measures of Protection Requirement for Trade Secrets 

Trade secrets are invaluable assets for businesses, encompassing proprietary information, formulas, processes, techniques, or customer data that provide a competitive advantage. Unlike patents or copyrights, trade secrets rely on confidentiality and are not formally registered. Maintaining the secrecy of trade secrets is essential for preserving a company's distinctiveness and competitive advantage in the market. 

Under the Defend Trade Secrets Act ("DTSA"), the owner of a trade secret must take "reasonable measures to keep such information secret." 18 U.S.C. § 1839(3)(A). The Uniform Trade Secrets Act and related state trade secret statutes have similar requirements. See, e.g., Uniform Trade Secrets Act § 1(4)(ii) (requiring the trade secret to be "the subject of efforts that are reasonable under the circumstances to maintain its secrecy"); Cal. Civ. Code § 3426.1(d)(2) (same); Tex. Civ. Prac. & Rem. Code § 134A.002(6)(B) (requiring that "the owner of the trade secret has taken reasonable measures under the circumstances to keep the information secret"). This requirement is critical because a failure to take reasonable measures may result in the sensitive information losing its valuable trade secret status. 

The DTSA does not define "reasonable measures"; rather, whether particular safeguards are reasonable will depend on the circumstances. The good news is that a party need only take "reasonable measures," not every conceivable measure. Thus, courts have found in some cases that a party took reasonable measures to protect its trade secrets despite inadvertently disclosing them to a customer, where the disclosure resulted from a good-faith mistake and the party took immediate action to maintain the secrecy of the information upon learning of the mistake. See Fireworks Spectacular, Inc. v. Premier Pyrotechnics, Inc., 147 F. Supp. 2d 1057, 1066–67 (D. Kan. 2001); see also John Bean Tech. Corp. v. B GSE Group, LLC, 480 F. Supp. 3d 1274, 1296–99 (D. Utah 2020). 

Generative AI and the Potential for Unintentional Disclosure of Trade Secrets  

Generative AI applications have enormous potential to increase productivity and create innovative solutions for companies across every industry. For example, in the software industry, a growing number of applications can parse natural and programming language inputs to generate or test source code. And in the life-sciences sector, AI applications can take amino acid sequences and predict protein structures. With continued innovation in the generative AI space, the potential for and use of such tools will only continue to grow.  

Generative AI applications can autonomously create original content by extrapolating from vast amounts of data collected both from public sources and from user inputs. That collected data is often retained on servers controlled by the company that supports the generative AI application. This data collection process implicates various trade secret concerns for the companies that use these applications. Following reports of sensitive information being leaked to third parties after use of generative AI platforms, many businesses have implemented outright bans or restrictions on the use of generative AI at work to protect their proprietary information. 

There are three primary concerns when an employee inputs company confidential or other sensitive information as a prompt into a generative AI application: (i) depending on the terms of the corresponding end-user license agreement ("EULA"), the company that supports the generative AI application can potentially review, release, or sell that sensitive information; (ii) the application itself can potentially expose the sensitive information to third parties by using it to train its responses; and (iii) a third party may access the sensitive information if the company that supports the generative AI application suffers a security breach. Moreover, in the event of a disclosure, the employee-user cannot retrieve or delete the sensitive information that has been input into the application and stored on the application's servers, and cannot otherwise regulate how that information is used or protected once disclosed.  

Currently, more can be done to protect company trade secrets from disclosure by employees. It has been reported that 70% of employees using generative AI tools do not report such use to their employer. This suggests that many companies have not yet implemented adequate policies in the wake of the generative AI boom. By updating their policies for protecting trade secrets, companies can better prepare themselves for the growing use of generative AI applications. 

Mitigation Strategies 

In addition to a company's standard policies for protecting its trade secrets, there are several solutions to further protect against the disclosure of trade secrets through the use of generative AI: 

Blanket Ban. As seen from recent announcements by some large multinationals, one solution to prevent the disclosure of trade secrets through generative AI is to prohibit the use of generative AI for work-related tasks altogether. One way to implement this solution is to prevent employees from downloading the software and from accessing web-based applications, which would stop most employees from using these tools. Another is simply to instruct employees not to use the software, which is easier to implement but less effective in preventing use. Regular monitoring and audits can help detect and prevent potential violations (a simple illustration of such monitoring appears below). Either approach, however, requires constant maintenance and policing to be effective. As the use of generative AI proliferates, this may become impractical. Additionally, companies that ban generative AI completely may be at a competitive disadvantage to companies that allow or encourage its use because of its potential benefits.  
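
By way of illustration only, the following is a minimal sketch, in Python, of the kind of usage monitoring described above. The log format, user names, and list of generative AI domains are all assumptions made for the example; actual monitoring would depend on a company's own proxy, firewall, or data loss prevention tooling.

```python
# Illustration only: flag outbound requests to generative AI services in a
# hypothetical web proxy log. The domain list and log format are assumptions.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_genai_use(log_lines):
    """Yield (user, domain) pairs for requests to listed generative AI domains.

    Assumes each log line is formatted as: "<timestamp> <user> <domain> <path>".
    """
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in GENAI_DOMAINS:
            yield parts[1], parts[2]

# Example usage with fabricated log entries.
sample_log = [
    "2024-01-15T09:12:03Z alice chat.openai.com /c/new",
    "2024-01-15T09:14:41Z bob intranet.example.com /wiki/home",
]
for user, domain in flag_genai_use(sample_log):
    print(f"Potential policy violation: {user} accessed {domain}")
```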

Robust Access Controls. An alternative to a blanket ban on generative AI is to limit its access and use. Companies should already have established protocols to limit access to sensitive data. Similarly, companies should consider establishing protocols to limit who can operate and interact with generative AI systems. In addition to limiting who has access, companies should also consider limiting or reviewing what can be used as inputs to generative AI applications. For example, software could be used to prevent certain keywords or phrases from being submitted as prompts (a minimal sketch of such a filter appears below). As with a blanket ban, regular monitoring and audits can help detect and prevent potential violations. These considerations can also be informed by the corresponding EULAs. 
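
Similarly, the following is a minimal sketch, for illustration only, of the kind of input filter described above. The restricted terms and the submit_prompt() placeholder are hypothetical; a real deployment would integrate with whatever generative AI interface and data loss prevention tooling the company actually uses.

```python
# Illustration only: block prompts containing restricted terms before they
# reach a generative AI service. The term list below is hypothetical.
RESTRICTED_TERMS = ["project aurora", "formula x-17", "customer pricing model"]

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any restricted term (case-insensitive)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in RESTRICTED_TERMS)

def submit_prompt(prompt: str) -> str:
    """Placeholder standing in for the actual call to a generative AI service."""
    return f"[model response to: {prompt}]"

def guarded_submit(prompt: str) -> str:
    """Refuse to forward prompts that contain restricted or sensitive terms."""
    if not is_prompt_allowed(prompt):
        raise ValueError("Prompt blocked: contains restricted or sensitive terms.")
    return submit_prompt(prompt)

# Example usage.
print(guarded_submit("Summarize these publicly filed documents."))
try:
    guarded_submit("Draft a memo about the Project Aurora manufacturing process.")
except ValueError as exc:
    print(exc)
```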

Enterprise Licenses. Companies that choose to allow the use of generative AI should consider obtaining an enterprise license that places restrictions on what the AI provider can do with prompts or other inputs to the system. For example, an EULA for an individual user subscription might specify that inputs can be used to train the underlying models for use by third parties. In contrast, an enterprise license may provide either that inputs cannot be used to train the underlying models or that any models trained on them may be used only by the company (to the exclusion of third parties).  

Third-Party Protection. In addition to employees, contractors and other third parties may use generative AI in connection with their work for a company. Companies must review their existing contracts and consider whether to extend any of the above policies to these third parties as well.  

Employee Education and Awareness. Finally, regardless of whether a company bans, limits, or even encourages the use of generative AI, raising awareness among employees about the importance of trade secret protection and the risks associated with generative AI is crucial. Courts have consistently found that companies took reasonable measures to protect their trade secrets by maintaining updated employee agreements and policies. See, e.g., Philips North America LLC v. Hayes, 2020 WL 5407796, at *9 (D. Md. 2020) (plaintiff plausibly alleged reasonable measures to protect its trade secrets based on reference to an "Employee Ethics and Intellectual Property Agreement"); ExpertConnect, LLC v. Fowler, 2019 WL 3004161, at *4 (S.D.N.Y. 2019) (plaintiff plausibly alleged reasonable measures to protect its trade secrets based in part on reference to an employee handbook); Enterprise Leasing Co. v. Ehmke, 197 Ariz. 144, 151, 3 P.3d 1064, 1071 (Ariz. Ct. App. 1999) (finding that a company took reasonable measures to protect its trade secrets by limiting disclosures, including a confidentiality provision in its employment agreements with high-level managers, and including a confidentiality provision in the employee policy handbook). Thus, companies should update their employee handbooks, agreements, and policies to address the use of generative AI and conduct training programs to educate staff on handling sensitive information, emphasizing the legal and ethical obligations surrounding trade secrets.  

Conclusion 

The advent of generative AI brings immense opportunities but also poses new challenges for protecting company confidential information and trade secrets. Whether companies choose to ban, limit, or allow the use of generative AI, they should implement robust security measures, establish clear policies, and foster a culture of awareness to mitigate the risks. By proactively addressing these challenges, businesses can safeguard their valuable intellectual property assets and maintain their competitive edge in the ever-evolving AI landscape.

Five Key Takeaways

  1. Although a ban on generative AI may be the strongest method of preventing disclosure of trade secrets, this solution may be costly to enforce and may result in a competitive disadvantage. 
  2. Robust limitations on access and inputs to generative AI applications can protect sensitive information while preserving generative AI's potential benefits to productivity and innovation.
  3. Companies should consider adopting enterprise versions of generative AI applications with EULAs that provide that any data collected is either protected or deleted. 
  4. Companies must also ensure that all contractors and third parties comply with their generative AI policies.
  5. Employee education and awareness are key to protecting one's sensitive information from being inadvertently disclosed through the use of generative AI.