Insights


Leading Technology Companies Agree to White House's AI Safeguards

On July 21, 2023, the White House announced that seven leading technology companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—voluntarily committed to mitigating the risks posed by artificial intelligence ("AI").

Under the non-binding Voluntary AI Commitments on "Ensuring Safe, Secure, and Trustworthy AI," the companies pledged to adhere to a set of eight commitments focused on ensuring that AI products are safe before they are introduced to the public, building systems that put security first, and strengthening the public's trust in these products. Specifically, the companies committed to:

  1. Internal and external security testing of their AI systems before releasing them. This testing will be carried out in part by independent experts and is intended to guard against significant AI risks, such as those to biosecurity and cybersecurity.
  2. Sharing information across the industry and with governments, civil society, and academia on managing the risks associated with AI, such as by identifying best practices.
  3. Investing in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights, which are an essential part of AI systems. The companies agreed that model weights should be released only when intended and after security risks have been evaluated.
  4. Facilitating third-party discovery and reporting of vulnerabilities in the companies' AI systems. This commitment is focused on establishing robust reporting mechanisms so that issues can be promptly identified and corrected.
  5. Developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system. This is intended to help promote public trust in AI by reducing the risks of fraud and deception.
  6. Publicly reporting their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use. This reporting will identify both security and societal risks.
  7. Prioritizing research on the societal risks posed by AI systems, including the avoidance of harmful bias and discrimination, as well as the protection of privacy.
  8. Developing and deploying advanced AI systems to help address society's greatest challenges, such as cancer prevention and climate change mitigation.

The announcement described the commitments as "intend[ed] . . . to remain in effect until regulations covering substantially the same issues come into force." The White House also announced that it is developing an executive order and pursuing bipartisan legislation to further regulate AI. Companies should closely monitor these developments, as the Biden Administration has signaled that AI regulation is a key priority.

Insights by Jones Day should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only and may not be quoted or referred to in any other publication or proceeding without the prior written consent of the Firm, to be given or withheld at our discretion. To request permission to reprint or reuse any of our Insights, please use our “Contact Us” form, which can be found on our website at www.jonesday.com. This Insight is not intended to create, and neither publication nor receipt of it constitutes, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.