Generative AI End-User License Agreements: What Users Need to Know

In Short 

The Background: The prevalence of generative artificial intelligence ("GenAI") is rapidly expanding, providing vast opportunities for efficiency and innovation, while also creating new risks. 

The Result: Users of GenAI systems should be aware that the end-user license agreements ("EULAs") governing these GenAI platforms may vary significantly in terms of their treatment of user prompts, output rights and ownership, data privacy, compliance and liability, confidentiality, and output use restrictions. 

Looking Ahead: Corporate and individual users of GenAI tools will need to monitor the EULAs to fully understand their rights and to develop internal governance procedures covering GenAI to mitigate potential business and legal risks.

While AI has been a hot topic for years, conversational GenAI tools have only recently become readily accessible and are being rapidly adopted by individuals. As businesses and individual end users become increasingly exposed to the capabilities of these technologies, a key challenge has been how to balance their potential benefits against the legal uncertainty and risks arising from their use. Most of these legal issues have yet to be resolved and are already the subject of litigation, regulatory proposals, and policy discussions. Users of GenAI tools should be aware of the provisions in the applicable EULAs to better understand their rights and potential risks.

GenAI is a subset of AI that generates new content in response to a user's input. Traditional AI processes data, detects patterns, and uses predictive models to make decisions or provide a rules-based response. GenAI, on the other hand, uses its training data and models, such as generative adversarial networks and large language models, to transform user prompts into a wide array of new content, such as text, code, images, videos, audio, and simulations.

GenAI's ability to create new content from user prompts presents serious questions. May user prompts be used by the creator of the GenAI tool to improve the tool and its underlying training models? Who can claim ownership over the output and user prompts—the GenAI provider, the user, the end user, or neither? Who bears liability for harm caused by GenAI outputs? While the applicable EULAs may shed light on some of these questions, they are likely not dispositive of key issues.

For example, a EULA may provide that, as between the user licensee and the owner/developer of the GenAI tool, the output is owned by the user—but that does not resolve whether a particular output can actually be owned by the user or whether that output infringes on the rights of another, including the individual end user. EULAs can vary greatly depending on the platform and whether it is accessed through a free, freemium, or enterprise version. Moreover, as technology evolves and new versions of these tools are released, the terms and conditions, including privacy practices, governing their use may also change.

Considerations Related to User Prompts 

One of the most exciting aspects of GenAI is that the models can improve over time; however, this raises various issues relating to the usage and ownership of inputs (i.e., user prompts). Various free and paid GenAI tools explicitly state that inputs may be used to train the platform's underlying model. If a user inputs company-sensitive or proprietary information, that information may be used to train the tool, making it available to the tool's creators and potentially to others as learning data.

Cohere, one of many providers offering free trials of its platform, recently raised $270 million in a Series C round to grow its enterprise version, which allows enterprises to utilize GenAI on the cloud platform of their choice while keeping data secure. Cohere's current Terms of Use require that the user grant Cohere "a nonexclusive, worldwide, royalty-free, irrevocable, sublicensable, and fully paid-up right" to user prompts for a variety of purposes, including sharing them with third parties.

These EULAs may also include clauses that allow users to opt out of the use of their inputs as training data or that limit such use to a model trained specifically for a particular customer, while others may prohibit the use of inputs as training data altogether. For example, Anthropic's Terms of Service for its pay-as-you-go Claude AI allow Anthropic to use results and inputs "to provide, maintain, and improve the Services and to develop other products and services." Some providers with paid enterprise models (i.e., private deployments) accept restrictions on the use of inputs as learning data, while their free public versions often do not offer the same protections. Some open, non-API models require users to fill out forms to disable the use of inputs for training.

Even if inputs are not used for training, the confidentiality and security of inputted information may be at risk. Depending on the EULA, a GenAI company may review, release, or sell the information, and a third party may access the information if the GenAI platform experiences a security breach. This may lead to unintentional disclosures of trade secrets, loss of legal privilege, and potential compliance issues under applicable privacy laws, such as the GDPR, if the user prompts contain any personal information or data. This is especially relevant in the United States, where 11 states have passed comprehensive privacy laws that will become effective over the next two years. Regulators, such as the Federal Trade Commission, and data protection authorities have already started to address a range of data privacy, protection, and competition issues related to GenAI.

Some platforms, like Adobe and Microsoft's Bing, include provisions that prohibit users from inputting confidential or private information into their GenAI platforms. However, not all GenAI platforms block user inputs that include confidential information—and in any event, some information would not be readily identifiable as confidential. Furthermore, some EULAs explicitly state that the provider does not guarantee the confidentiality or security of data used in connection with the GenAI service.

Many GenAI EULAs contain provisions establishing that the user is solely responsible for its inputs, including ensuring that the inputs do not violate applicable laws or the rights of third parties. Some EULAs also provide that the GenAI provider is under no obligation to review inputs for accuracy or potential liability, and prohibit the user from inputting information that violates the IP rights of others. For instance, Anthropic's terms require the user to represent and warrant that the inputs will not violate any third-party IP rights or data privacy laws, but permit the user to retain all rights, titles, and interests in the inputs. Retaining the IP rights to user inputs may become increasingly important, especially as the process and finesse of prompt creation has given rise to the emerging field and profession of "prompt engineering," and companies may derive competitive advantages from superior prompt engineering. Companies contracting with GenAI platforms may be able to negotiate the rights and protections related to inputs.

Considerations Related to Generated Outputs 

Regarding output ownership, many GenAI EULAs contain provisions that either disclaim, or assign to the user, all rights, titles, and interests in outputs created by the system; however, this does not guarantee that the user has IP rights in the output. Moreover, enforcing any such IP rights may face obstacles under current IP laws, particularly in the United States.

Current U.S. patent and copyright law regarding ownership of AI-generated outputs remains unclear. Recently, the U.S. Supreme Court denied certiorari in Thaler v. Vidal, leaving in place the Federal Circuit's ruling that only human beings, and not AI systems, may qualify as inventors under U.S. patent law. Further, the U.S. Copyright Office issued guidance stating that AI-generated material is not protectable when a human solely provided a prompt that resulted in the generated content. The degree to which AI can "assist" a human being in connection with an invention or work of expression is largely unresolved and will likely be the subject of continuing litigation and legislative debate.

Most EULAs include representations that different users may receive the same or substantially similar outputs in response to their inputs. For example, Adobe's GenAI terms explicitly state that "[t]he output may not be unique and other users of generative AI features may generate the same or similar output. The Output might not be protectable by Intellectual Property Rights." Users should be aware that certain GenAI providers may require users to license their rights, if any, in the output to the provider for further exploitation, including use as training data. For example, the Bing Terms of Use state that by using the GenAI services, the user is "granting Microsoft, its affiliated companies and third party partners permission to use the Captions, Prompts, Creations, and related content in connection with the operation of its businesses (including, without limitation, all Microsoft Services), including, without limitation, the license rights to: copy, distribute, transmit, publicly display, publicly perform, reproduce, edit, translate and reformat the Captions, Prompts, Creations, and other content you provide; and the right to sublicense such rights to any supplier" of the GenAI services.

Certain providers, on the other hand, may assign users the rights to, and/or disclaim ownership of, the output, thereby asserting that the user is responsible for the output content. In such situations, liability could arise if the output violates the IP or privacy rights of others—a risk that the user may be unwilling to take on, particularly given that the user generally will not know how the output was actually created by the underlying AI system. Users should also closely review any outputs, as some providers, including Adobe and Microsoft, disclaim any warranties regarding outputs and "any implied warranties that the output will not violate the rights of a third party or applicable law…." Some code-generating GenAI platforms may output code derived from open-source training data. Therefore, users must take steps to manage the risks of using GenAI in software development and to comply with open-source software license obligations in order to avoid disputes, including litigation.

Even if a provider's EULA contains provisions that assign the rights in the output to the user, some terms nevertheless restrict the use of such outputs. For example, some providers restrict the commercial use of outputs, and other providers restrict only the commercial use of outputs generated in free versions. There are also providers that place no commercial restrictions on outputs, regardless of whether the output was created under a free or paid plan. Therefore, it is crucial for GenAI users to review the applicable EULA to fully understand any restrictions or limits on the use of outputs.

The risks associated with potential misuse of GenAI tools are vast. GenAI systems may hallucinate, provide inaccurate information, or generate harmful outputs. Many EULAs state that GenAI providers do not represent or warrant that outputs are accurate, and providers limit or disclaim liability for inaccuracies or any damages caused by their services. Most EULAs also encourage users to independently evaluate the outputs, through human review, before relying on them.

Lastly, users of GenAI tools should closely monitor the disclaimer and indemnification sections in the applicable EULA. This is especially important where the GenAI output may be used in, or can find its way to, a company's external-facing products, services, or information. Some EULAs include disclaimers of all representations and warranties. Further, most EULAs contain provisions that favor and protect the provider. For example, Cohere's indemnification provision requires the user to "defend, indemnify and hold harmless the Cohere parties from and against any claims, causes of action, demands, recoveries, losses, damages, fines, penalties or other costs or expenses" arising from or in connection with the use of its GenAI platform. Thus, users must thoroughly review the output before using, sharing, or replicating it externally to minimize legal risks. This is especially the case as many EULAs state that the provider will not indemnify the user, while many others require the user to indemnify the provider.

We are only at the beginning of the widespread use and growth of GenAI systems. Users should review the EULAs of these platforms to fully understand their rights and protect against risks. Companies should also develop internal governance procedures covering GenAI use policies, risk assessment processes, and training to further mitigate potential legal risks related to the use of such tools.

Five Key Takeaways 

  1. Users must closely inspect the EULAs of each platform and version to specifically understand how the legal issues relating to outputs and inputs are treated.
  2. Careful consideration and review must be taken before inputting information to GenAI systems to avoid unintentional breaches of confidentiality, data privacy and security laws, and loss of IP, such as trade secrets.
  3. Regardless of whether a EULA assigns input and output ownership, numerous IP issues remain, such as GenAI inventorship/authorship, potential infringement of the output on third-party rights, and open-source software license compliance.
  4. Human review of GenAI outputs is crucial to avoid liability relating to inaccurate or harmful outputs.
  5. Companies need to develop internal governance procedures addressing GenAI to minimize risks.
Insights by Jones Day should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only and may not be quoted or referred to in any other publication or proceeding without the prior written consent of the Firm, to be given or withheld at our discretion. To request permission to reprint or reuse any of our Insights, please use our “Contact Us” form, which can be found on our website at www.jonesday.com. This Insight is not intended to create, and neither publication nor receipt of it constitutes, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.