JONES DAY TALKS®: The Rise of AI Regs: Approaches from the European Union and United States

As technology and regulatory frameworks evolve, artificial intelligence ("AI") legal issues have emerged as a key topic in transactional, litigation, and regulatory compliance contexts. Jones Day partners Laurent De Muyter, Carl Kukkonen, Stefan Schneider, and Emily Tait discuss the European Union's implementation of a comprehensive framework for governing the flow of data, digital services, and AI, while the United States is still exploring regulation.


A full transcript follows below:

Dave Dalton:

Previous JONES DAY TALKS® podcasts have focused on how regulators move to introduce new laws to address rapid changes in technology, because, invariably, the tech runs ahead of a regulator's ability to understand and anticipate what constitutes appropriate oversight as new technology emerges and evolves.

Recently we've seen this happen with telemedicine and with blockchain and cryptocurrencies, just to cite a couple of examples. Now Jones Day has published a much-anticipated white paper, Rising Global Regulation for Artificial Intelligence. We have an international panel of our lawyers here to talk about what they wrote. I'm Dave Dalton. You're listening to JONES DAY TALKS®.

Based in Detroit, Jones Day partner Emily Tait has more than 20 years' experience handling high-stakes matters involving patented technology, copyrighted works, trade secrets, data, and trademarks. She also co-chairs Jones Day's global cross-practice autonomous vehicles, artificial intelligence, and robotics team.

Partner Stefan Schneider works primarily out of the firm's Munich office. He is a private equity and venture capital lawyer with more than 15 years of experience advising investors, businesses, and managers on complex domestic and multinational transactions.

Based in Brussels, partner Laurent De Muyter is an EU regulatory lawyer whose practice focuses on electronic communications, digital, data protection, and antitrust regulations. He helps clients maintain regulatory compliance and also represents them in challenges before the EU Commission, national regulators, and the courts in relation to infringement procedures and related damage claims.

Carl Kukkonen has more than 25 years' experience helping clients build litigation-ready patent portfolios, minimize risk through invalidity and freedom-to-operate analyses, and develop and implement litigation strategies including inter partes reviews. Carl, who's based in San Diego, is co-leader of the firm's artificial intelligence team. Panel, thanks for being here today.

Laurent De Muyter:

Thanks, Dave.

Emily Tait:

Thanks, Dave.

Stefan Schneider:

Thank you, Dave.

Dave Dalton:

Let's just jump into the questions right away. We've got a lot of content to cover, so let's get rolling. Let's start with Stefan. We're following up on the white paper that the firm published late last year, Rising Global Regulation for Artificial Intelligence. Lots to talk about there. But can you talk about what's going on in the EU, the intensive regulations that are in consideration there? What's being proposed?

Stefan Schneider:

Yeah. You're right, Dave. The last months have certainly been very intense in the field of artificial intelligence. But when you look at the legislative efforts of the European Union, I would not so much call them intensive. I would rather call them ambitious and a long time in the making.

They started their legislative efforts years ago. It's part of the second digital agenda, which was started in 2020 and is supposed to run until 2030. The plans aim for no less than creating a uniform playing field without national borders for a continent of 450 million people and the second-largest economic bloc in the world.

What the EU proposes in legal terms is a set of harmonized building blocks: an Artificial Intelligence Act, an AI Liability Act, a Data Governance Act, a Digital Services Act, and a Digital Markets Act. Together, they will form a comprehensive framework governing the flow of data, digital services, and artificial intelligence. That, in turn, is going to shape our future economies and societies in Europe.

Dave Dalton:

What's the tentative timetable for this again? When might this come into force?

Stefan Schneider:

The preparation already started in 2020 with an AI white paper. The AI Act is going to come into force in 2023. That's the plan, at least.

Dave Dalton:

All right, right around the corner. Laurent, is there anything you'd add to his remarks?

Laurent De Muyter:

Yes. I mean Stefan's absolutely right. The wave of regulation is unprecedented in the EU. The EU clearly wants to set a global standard for the digital sector. If we look at it in terms of AI, I think we can classify the regulation in three streams.

The first one aims at making data available for deploying AI. As you know, data is the fuel of AI. And so, there's a series of regulations, some of which Stefan already pointed out, which have been adopted.

So, for example, there's the free flow of data regulation, which prevents member states from adopting data localization laws. There's also the Open Data Directive, already adopted, to make sure that public data is made available to AI developers. And there's the Data Governance Act, which creates trusted data brokers for AI.

The EU has also proposed a Data Act, which should make data transfer compulsory in B2B situations. It also wants to create data pools in certain sectors, like the health sector, where sector data will be available to the industry. So the first stream is really making the data available for AI.

The second stream relates to what type of AI can be put on the EU market, and that's the AI Act, which adopts a risk-based approach: some AI is prohibited in the EU, some AI must go through conformity assessment procedures, and some AI carries transparency obligations.

Then the third stream relates to liability, where specific rules have been proposed, in particular the AI Liability Directive and the review of the Product Liability Directive. The aim there is to tackle the black-box issue that we see with AI.

So all those proposals aim to make sure that the EU doesn't miss the opportunity of industrial data, because there's a clear feeling in the EU that, at least in terms of consumer data, the EU has missed the train.

Dave Dalton:

Clearly very thorough. Lots of activity going on over there. Let's swing over to Carl for a second. Carl, talk about what's happening in the United States at a high level in terms of potential federal regulations for AI.

Carl Kukkonen:

So at the moment, there is no federal legislation pending that would govern the use of AI across systems in general. The White House promulgated its AI Bill of Rights in late 2022, which sets out cornerstones to consider when implementing an AI system, but it is not a law.

More recently, the National Institute of Standards and Technology, aka NIST, released an AI risk management framework. Both of these documents, again, are largely directed to best practices for agencies and companies to employ.

So that being said, on the federal level, agencies have promulgated some regulations to protect consumers and the population as a whole, regarding the transportation system, the use of AI in medical devices, and the use of AI in housing and finance to avoid consumer bias and the like.

Also, interestingly, the Department of Commerce recently amended its export control regulations to explicitly cover certain AI technologies, acknowledging the importance of this technology for the US and making sure that it is taken outside the US only in limited circumstances.

Dave Dalton:

Okay. So it sounds like there are lots of pockets of activity. But in terms of kind of an umbrella set of regulations in the US, there's nothing on the horizon.

Carl Kukkonen:

Yeah. We're seeing state laws being implemented that are directed to particular applications of AI in particular industries. But as far as I can see right now, there isn't going to be anything similar to what's being contemplated in the EU with regard to regulation of AI systems overall.

Dave Dalton:

All right. Picking up on a point that was brought up earlier, let's switch focus for a second to how companies using AI should properly handle sensitive information. Let's swing over to Emily. Talk about the potential risks and maybe some best practices, ways of handling this information properly.

Emily Tait:

Sure. Fundamentally, anytime we're talking about AI, we're talking about a term that's so broad and constantly evolving. And so, in terms of best practices and the risks, there are some general thoughts that apply across industries and across companies. But at the same time, that is always changing, and it is industry-specific depending on the demands or requirements of that particular industry.

As an IP lawyer, with respect to sensitive information, one thing that jumps out at me, of course, is trade secret information and company confidential and proprietary information. When there is such information, and most companies, of course, have information that's closely guarded, there's a concern about maintaining the secrecy of that information.

And so, that is one top-level, I think, risk in terms of managing employees and other personnel who may have access to this information and how they're using it or inputting it into AI systems. So that's one risk. You don't want to eviscerate trade secret protection or make your confidential and proprietary information publicly available or available to anyone that you don't want to have access to it.

Then, of course, on the other hand, there's the issue of data security and privacy. Of course, the particular concerns for data privacy and security are going to vary by industry. But I think depending on what the industry is, there should be an awareness as far as the protection of such data, an awareness of how that data is being used and potentially inputted into AI systems or used by AI systems.

Another general thought in terms of managing those types of risks is having an auditing system in place, having this be a top-level issue for corporate governance: understanding where the data is being used, whether the use of that data is compliant with any applicable law or regulation, and whether it is compliant also with the privacy policies of that particular company.


Dave Dalton:

And the use of AI. I mean, companies have always had this duty to protect sensitive data and so forth. Why does layering in artificial intelligence applications make it more complicated, I'm wondering?

Emily Tait:

Well, because AI is this broad category and it's constantly evolving, the ability of an AI system to process data and data sets is increasingly sophisticated, and in some respects that sophistication leads to greater challenges in managing how the data is being used. When data was simply used by human beings, there was a way to more effectively track that. So as AI systems become more sophisticated in how they mine data and process that data, that level of human oversight and awareness of what exactly is going on with the data becomes more challenging.

Dave Dalton:

Sure. Laurent, can you talk about where the EU is in terms of protecting data and sensitive information? How are the regs and standards there?

Laurent De Muyter:

Yeah, I think where the EU really is a bit ahead of the US is clearly around the issue of privacy protection and data protection. The point is, with AI, the value is often in processing personal data, because you get much more value out of that. You may also have AI which makes correlations not expected at the beginning, but which will use the personal data. And as soon as you use personal data, then in the EU kicks in what you probably know as the GDPR, the General Data Protection Regulation, which regulates the processing of personal data across the EU.

So you cannot do what you want in the EU with personal data. For example, you need to have the individual's consent or another legitimate legal basis just to process the data. You also need to be transparent about what you do with the data, and the data needs to be accurate. The processing also needs to be limited in relation to the purpose that you've identified, including for the retention periods of the data. So you cannot retain data that is not necessary for the purpose for which you collected it.

Whereas for AI, obviously, in some situations the benefit is to have lots of data and to cross the data so that you make those correlations. So you want to keep as much data as possible. That's not possible in the EU. There are also specific rules for sensitive data, like health-related data or biometric data, which restrict how such data can be used in the EU.

A specific point of attention in the EU is also the transfer of data. The EU restricts the transfer of EU personal data outside the EU, and the European Court of Justice has already twice struck down agreements between the EU and the US that allowed US companies to transfer data between these countries. So this could apply even if you process EU data from outside the EU.

So you see, Dave, the EU is already a global standard-setter for privacy regulation, and you need to take that into consideration each time AI is processing personal data.

Dave Dalton:

Interesting. One part of the white paper I thought was particularly compelling talked about unintentionally bringing biases or discrimination into certain processes when AI is being used. Emily, let's talk about that. Whatever the jurisdiction, how can you make sure that the AI you're using doesn't unintentionally bring bias into a process?

Emily Tait:

Thanks, Dave. The issue of bias is understandably one that is deeply concerning when people hear about artificial intelligence. We know that the way artificial intelligence works is by processing data and data sets and then making, in essence, predictive analytics or predictions about how something may proceed. We see that all the time in terms of direct advertisements for products and services that we might find appealing or interesting, and sometimes that can be quite useful, or introduce us to a product or service that we didn't know about but that the algorithm has somehow determined that people like us might find interesting.

Where it becomes obviously disturbing and deeply concerning from a legal and ethical perspective is when the AI is functioning in such a way that it leads to biased outcomes, outcomes that are impermissibly biased, legally or ethically. There have been a variety of well-publicized cases of this happening with sophisticated corporations.

One example of that is in the context of hiring applicants. If the AI system is making predictions about the types of employees that would perform well at a job, and the data that it is basing that analysis upon has built-in biases, whether those are intentional biases or unconscious biases, it can result in perpetuating bias by teeing up candidates who have particular characteristics that have previously been shown to be successful in candidates or employees of a company. If that company has historically hired white males with a particular degree or from particular universities, you can see how that would be perpetuated.

Obviously that's a simplistic example because there could be deeper patterns in the data that the AI could detect and therefore perpetuate. So it's obviously concerning. There is a deep concern about, if not completely eliminating bias, which may be aspirational, how does one develop and utilize AI systems that substantially reduce impermissible bias?

Dave Dalton:

Sure. Let's go to Carl. Carl, given what Emily was just talking about, are there steps a company can take to help prevent bias in AI-enabled applications? Whether it's an application for a job or a loan or whatever, what can a company do in good faith?

Carl Kukkonen:

So those designing, developing, and deploying AI systems need to take proactive and continuous measures to protect those who might be affected by discrimination, to look at all three of those stages, namely the design stage, the development stage, and the deployment stage, and to make sure that the system is not operating in an unintended fashion.

So in the design phase, you have to make sure that there aren't certain measures, like Emily said, such as people from a certain college, that could be a proxy indicator for a subpopulation; that may be an inadvertent way of discriminating, so you need to avoid those kinds of things. If you're using a training data set, you want to make sure it's as equitable as possible.

And then certainly after the system's launched or otherwise deployed, you need to continually monitor to make sure it's not biased or having some unintended effect with regard to a certain subpopulation in a discriminatory fashion. There are also pushes for transparency, to give more information regarding these kinds of processes: having independent auditing, having disparity testing results and the like made available to the public so that they can essentially pressure test what's being done by these companies to make sure that they are indeed fair.
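For a concrete sense of what the disparity testing Carl mentions can look like, here is a minimal sketch in Python. It computes selection rates by subpopulation and flags any group whose rate falls below four-fifths of the highest group's rate, a common rule of thumb in US employment-selection analysis. The data, function names, and 0.8 threshold are illustrative assumptions, not drawn from any regulation or tool discussed in this program.

```python
# Minimal disparity-testing sketch (illustrative assumptions throughout).
# Input: (group, selected) pairs, e.g. hiring outcomes per applicant group.
from collections import defaultdict

def selection_rates(records):
    """Per-group selection rates from (group, selected) pairs."""
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        if selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def disparate_impact_flags(records, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the "four-fifths" rule of thumb)."""
    rates = selection_rates(records)
    best = max(rates.values())
    if best == 0:  # nobody selected at all; nothing to compare against
        return {g: False for g in rates}
    return {g: rate / best < threshold for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical outcomes: group A hired 3 of 4, group B hired 1 of 4.
    data = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", False), ("B", True), ("B", False), ("B", False)]
    print(selection_rates(data))         # {'A': 0.75, 'B': 0.25}
    print(disparate_impact_flags(data))  # {'A': False, 'B': True}
```

In practice, a check like this would run continuously against live outcomes, as Carl describes, rather than once against a static data set.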

Dave Dalton:

Yeah, and hearing you talk, it sounds like part of the battle would be road testing the application and then seeing what kind of applicants or applications are coming in, looking back even after you get through beta testing or something. That would be key there, wouldn't it?

Carl Kukkonen:

Yeah. No, well, I think it's continuous testing. But yeah, in the development stage you're going to be doing beta testing, and then certainly in the deployment stage you're going to continually monitor the outcomes of the AI systems and whether they discriminate against a certain subpopulation. Like Emily said, you're not going to avoid discrimination entirely, but you want to minimize it as much as possible.

Emily Tait:

Just another comment on the issue of bias and the concerns regarding AI. It's worth also pointing out that AI can be used to reduce bias and to detect unconscious bias that humans may have no knowledge of whatsoever, either personally or in their organization. So while we are obviously rightly concerned about the perpetuation of impermissible bias, a bright spot is really thinking of the ways in which AI can be used in conjunction with human intelligence to yield better outcomes in terms of bias. And that's something that obviously companies across the board are looking into. The desire here for AI is to get to something better than what a human being could do alone, and it's important to recognize that AI systems can be used to reduce bias as well.

Another point on that is just obviously there's so much concern about bias. It always makes me question... A lot of times when people are raising this concern, it's as if human beings have been fantastic at making unbiased decisions, and of course historically we know that's not at all true, human beings have had major problems with that. So again, I think with AI it's the goal of using it in an ethical and legal manner to do better than what a human being or human-driven organization could do on its own.

Dave Dalton:

What types of liability risks are there, Emily, associated with artificial intelligence?

Emily Tait:

So some of these liability questions are concerning because a lot of it is unknown, and because AI is evolving and changing shape all the time with the technological developments, the questions of liability remain immense and unknown, which is shaky territory for organizations to operate in. As an IP lawyer, I mean, one of the things I think about is inadvertent encroachment or infringement of IP based on reliance upon an AI system to generate, for example, computer code or some other output that is based upon the intellectual property of a third party. So that's one area.

Obviously, there are other issues of potential liability. You can imagine a scenario where AI is used to generate some type of output and that output is deemed to be a reflection of company policy, and that policy may present a legal issue or it may just be at odds with the company's official written policies and procedures. In terms of liability enforcement issues, there's questions about who is going to be deemed to be the actor. So if you're a party that has been harmed by something, you're trying to prove up causation, who is the actor that would be accountable or liable for whatever grievance that you have? Would it be the algorithm creator, the software producer, the AI system, or the AI user? So there's so many challenges there, but those are just a handful that kind of come to my mind.

Dave Dalton:

You mentioned enforcement. We haven't talked much about that. Let's go to Laurent first asking about enforcement measures in the EU. Laurent, what might happen there in terms of enforcement?

Laurent De Muyter:

Well, currently in the EU, enforcement is largely up to the independent data protection authorities in each member state. They can impose fines for violations of the GDPR at this stage, which is the main legislation that has been adopted and has an impact on AI. And the fines can be significant, because they can go up to 4% of global turnover, so pretty much like we see in antitrust. We see more and more enforcement, and the amount of the highest fines imposed is increasing regularly. In AI, there have been cases, for example, involving data breaches or the unauthorized use of facial recognition. The same regulatory oversight is expected with the AI Act, which even increases the maximum level of the fine to 6%.

Dave Dalton:

Wow.

Laurent De Muyter:

So the enforcement is essentially national, you see, but the rules are European, and so there is some coordination at the EU level. That's the current situation we see as regulatory lawyers. But there is an important trend for the future, which is potentially class actions before national courts. So far, class actions are clearly less developed in the EU than they are in the U.S. However, the class action directive will need to be implemented by member states by the middle of this year, and it could facilitate the launch of class actions in the EU, because it would cover both GDPR and possibly also AI Act infringements. One big difference between the U.S. and the EU will remain: we do not have, and never have had, treble damages in the EU like you have in the U.S.

Dave Dalton:

Interesting. If we can pick this up with Carl and talk about what happens in the United States after there's an action, what kind of enforcement? What happens in a situation here, Carl?

Carl Kukkonen:

So at the moment, I mean, there's still a patchwork of state laws and some federal regulations that govern the use of AI, arising out of some of the liability issues that Emily and others have mentioned. So enforcement might relate to state or federal laws relating to employment practices; we've seen a lot of laws with regard to employment practices on a state-by-state basis. They may fall under particular state data privacy laws or federal laws such as HIPAA or the Fair Credit Reporting Act, or, largely, they fall under general tort law. And so if someone's been harmed and they can show causation, like Emily mentioned as one of the issues, then that may be a basis for enforcement and some sort of liability with regard to, for example, a malfunctioning AI system.

Dave Dalton:

So the fact it was an AI-related incident doesn't necessarily make a difference. You mentioned that it can fall under other statutes that are out there, and general tort law, and so forth then, right?

Carl Kukkonen:

Yeah. If you're an automotive manufacturer and you use an AI system, and the automobile fails in some fashion and causes injury to a driver or passenger, the manufacturer may still have the same liability it would have if an axle broke inadvertently from a poor manufacturing process.

Dave Dalton:

Sure. Carl, let's stay with you. Talk about protecting intellectual property, again, as it relates to this conversation in AI. Does AI bring out a new set of challenges in terms of IP protection, something different here?

Carl Kukkonen:

In general, AI innovations are going to be treated in the same manner as other software and/or computer-based systems when it comes to patent protection, and to some extent copyright protection as well. On the software front, many patent applications and patents have had difficulty overcoming issues with regard to patent subject-matter eligibility. So it will be easier for AI systems to get that type of protection if they are part of a practical application or a process that ties into some sort of real-world process; it could be a manufacturing process, it could be a security process of some sort. And as long as it ties into the real world or is practically applied, then there is an ability to get patent protection.

In some cases, the underlying models used by AI are open source or otherwise known, and so the true value for many companies comes from the training data sets. In some cases, these data sets can be protected under copyright, maybe more so outside the U.S. than in the U.S., but many companies are opting to protect their training data sets as trade secrets. That being said, the continuing push for AI transparency is in a kind of opposite tension with trade secrets, because they want you to provide information with regard to your systems, probably including what features or data sources are being used in your training data sets, for audit and other purposes, to make sure that the systems are working safely and are not discriminating against certain subpopulations.

There are other issues that arise from AI inventions, such as the level of detail that is required in the disclosure, also known as enablement; inventorship issues, which other folks have touched upon; as well as the level of expertise of the person of ordinary skill in the art to be considered when evaluating patentability issues of a patent application.

Dave Dalton:

I'm going to follow up by going to Emily Tait for a second. Emily, we've talked on previous programs about if a human doesn't create something, who owns it, that kind of thing, and how do you protect that? But what about a case when artificial intelligence creates IP? How is that protected, or how is that treated, Emily?

Emily Tait:

So your question is an interesting one. If you think about intellectual property as it is traditionally defined, it is property of one's intellect, creations of the human intellect. And so, working with that definition, if AI were to create a work or an invention that would be regarded as intellectual property had it been made by a human author or inventor, the question is: is it even intellectual property if it's not the product of human intellect, if it's not a creation of the human intellect? And so, right now, that's a perfect example of where our laws have not really caught up to the technology.

As it currently stands, if you have an AI system creating a work or an invention without a human author or inventor, that is not protected as a copyrighted work, because copyrighted works require a human author. And similarly, with an invention, an AI system without a human inventor is not creating a patentable invention under our current laws. To be an inventor on a United States patent application, you need to be a human being and not an AI system. It's very interesting. There are certainly many examples of AI creating works or inventions that, had there been a human author or inventor, yes, would have been intellectual property. As an example, an artificial intelligence system creating an artistic work, software code, a musical composition, all sorts of things.

But as I said, at this point, our laws haven't caught up to the technology. And so this is an area that's going to be subject to near-constant discussion. And I anticipate there will be changes in the years to come to address this, but time will tell.

Dave Dalton:

Yeah, sure. And it's fascinating when you think about it, because obviously, as you just pointed out, when the copyright laws and trademark laws were written, no one envisioned anything like artificial intelligence. They couldn't account for that when they were putting these regs together decades ago, I'm sure. So it will be fascinating to watch. One section of the white paper that was also of great interest, I thought, talked about M&A transactions and AI. Let's go to Stefan first. How is AI accounted for or valued in a transaction?

Stefan Schneider:

Yeah, thanks, Dave. That's actually not an easy answer to give in the abstract, because, as we've mentioned a couple of times already, there's not one AI. It always depends on its use and its nature. And when it comes to the use, we first ask the company selling itself, or the founders, or the startup, whether their AI is revenue generating, because in that case, if the use of the AI is not legitimate, because it's a prohibited use or a high-risk AI system, then the product or the service based thereon may need to be reworked, and maybe the business model may have to change. And believe it or not, we have seen this in the majority of the cases where we have done due diligence on AI systems.

But even if an AI system is not revenue generating, the risks alone from using AI, bias, data protection, other forbidden practices, those could create a liability for the company or prevent the future use of the AI system. So these are high-stakes questions that you need to look at and do due diligence on.


Dave Dalton:

Sure.

Stefan Schneider:

And when it comes to the nature of AI, you see, AI is both flesh and fish, so to say. It has a software component and it has a data component. And for both components, you need to ask where they come from. Who wrote the code? How was the AI algorithm developed? Where did the training data come from? Was its use permitted? We often speak to founders or software engineers, and the answers can be highly technical, which is why I'm immensely grateful to have colleagues such as Carl, Emily, Laurent, and many others within Jones Day who fluently speak that language and can pick up nuances that I would miss.

Dave Dalton:

Sure, sure. You mentioned Laurent. Let's go to him. Laurent, does AI potentially bring specific regulatory issues in the context of an M&A transaction?

Laurent De Muyter:

Absolutely, Dave. I mean, there's one side that Stefan has talked about, which relates more to the due diligence we need to do to identify the regulatory risk, which can be linked to the development or the use of AI.

As an antitrust lawyer, one point I'd like to flag is the impact of AI on regulatory approvals. There are essentially two aspects to this. First, AI is a source of innovation, and so there is increasing scrutiny by competition authorities when a deal relates to acquiring an AI company. In particular, the EU has established a system whereby a transaction can be reviewed under competition law even if it falls below the thresholds for review.

The system aims at addressing so-called killer acquisitions, where, for example, you would have a big tech company which acquires a small innovative AI company. But of course, this creates uncertainty about the risk of regulatory review of a deal: in terms of timing, in terms of substance, and in terms of certainty about which authority is going to review the deal under competition law, there's much more uncertainty now than before.

And if they take up the case, the competition authorities are likely to focus on data markets, and they're going to see whether the deal could trigger monopolization, lock-in, or leveraging that negatively affects customers.

The second important point is foreign direct investment. I mean, you know that in the US there's been the CFIUS review for a long time. But that was very particular. Now we see a spread of those regulations across the globe setting up FDI merger control. For example, in the EU today, there are 18 out of 27 countries which have recently set up such a system, and the number increases almost every month. So acquiring an AI business will most likely fall within the ambit of FDI review because of the critical nature of AI. And so you will need to look at very different elements to assess whether you need a review, compared to a competition merger assessment.

So we need to look at the place of the assets and subsidiaries of the company, the structure of the transaction, and the identity of the owner, and of the ultimate owner, of the AI company. So this is all new. It greatly complicates the regulatory approval of M&A, because the systems differ much more across countries than in competition law, where the merger review process is more aligned between countries. So this increases the unpredictability of M&A regulatory approvals, both in terms of the outcome and in terms of the timing needed for the review.

Dave Dalton:

Sure, sure. And thank you, Laurent. And one more M&A question, Stef. And you had said you wanted to mention that a company needs to integrate use and development of AI within their internal compliance program and M&A processes. Can you sum up what that means exactly? I had a note here that we want to make sure we touched on that.

Stefan Schneider:

Thank you, Dave. I would definitely say that it pays for any business to have an overarching AI strategy. Just to be clear, within your own organization: what would be the aim, the potential benefit of using an AI, and how would your business have to prepare the people and the processes for the adoption of the AI system? Is there maybe an area where you can safely and with little effort tiptoe into the waters of AI?

And as part of that overarching strategy, an internal compliance program would need to address the limitations and the risks emanating from the use of AI. We've heard a lot about that today, not only in terms of bias, explainability, and data protection, but also regarding availability and reliability of data. And specifically in terms of M&A processes, one of my core tenets is that you always need to think through an M&A process from beginning to end, meaning also think of the post-merger integration of an AI system into your own IT landscape. Do you have to change data collection routines? Do you have to do an IT integration, a data migration? So those are issues worth thinking about very early on in an M&A process.

Dave Dalton:

For sure. I was looking for a way to wrap this up with the notes everyone sent over in preparation. Emily had stated that AI raises board-level issues. Emily, what does that mean? That sounds real high level to me.

Emily Tait:

Yeah. So obviously AI has been around for many decades, but in the past few years, and in particular the past six months, the discussion on AI has reached a fever pitch. And companies are increasingly being presented with the business case, the business opportunities, the ways in which AI systems can be utilized to enhance their business in real ways, to enhance consumers' experience of their business, et cetera.

At the same time, the challenges and risks, the legal and ethical concerns, many of these issues that we've talked about today, present a great deal of concern for companies in terms of how they deploy AI within their own organization. How do they guide their employees on how to legally and ethically use AI, and to do so in accordance with company policy? And so for all of these reasons, AI is becoming a significant issue of corporate governance, and one in which there's going to be an increased demand for board oversight.

So all of these issues, I think, will be pressing, and it'll be very dependent on the company and the industry and everything else, but it'll be an increasing issue for boards to deal with.

Dave Dalton:

Absolutely. We will leave it right there. Panel, thank you. The name of the Jones Day white paper is Rising Global Regulation for Artificial Intelligence. You'll find a link to that white paper wherever you're listening to this podcast, as well as contact information for all of our panelists. So if you need more information or have questions, please reach out. Everybody, thank you so much for being here today. Great program.

Stefan Schneider:

Thanks, Dave.

Carl Kukkonen:

Thank you everybody.

Laurent De Muyter:

Thank you. It was a pleasure.

Emily Tait:

Thank you.

Dave Dalton:

Okay. Be sure to check out the white paper that instigated the conversation you just heard, Rising Global Regulation for Artificial Intelligence. There should be a link to that publication on the page you open to hear this program. And visit jonesday.com. You'll find our insights page with more podcasts, publications, videos, blogs, and other helpful and useful content. The website also has complete bios and contact information for the four panelists you heard on today's program. JONES DAY TALKS® is produced by Tom Kondilas. As always, we thank you for listening. I'm Dave Dalton. We'll talk to you next time.

Announcer:

Thank you for listening to JONES DAY TALKS®. Comments heard on JONES DAY TALKS® should not be construed as legal advice regarding any specific facts or circumstances. The opinions expressed on JONES DAY TALKS® are those of lawyers appearing on the program and do not necessarily reflect those of the firm. For more information, please visit jonesday.com.


Insights by Jones Day should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only and may not be quoted or referred to in any other publication or proceeding without the prior written consent of the Firm, to be given or withheld at our discretion. To request permission to reprint or reuse any of our Insights, please use our “Contact Us” form, which can be found on our website at www.jonesday.com. This Insight is not intended to create, and neither publication nor receipt of it constitutes, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.