
JONES DAY TALKS®: Paradise Lost - Court Says AI-Generated Work Not Copyrightable

A federal district court held in Thaler v. Perlmutter that an AI-generated image, “A Recent Entrance to Paradise,” cannot be copyrighted due to the lack of sufficient human contribution to its creation. Jones Day partners Emily Tait and Carl Kukkonen talk about the implications of the decision, the questions that remain, third-party complications, and what GenAI users need to know.


Read the full transcript below. 

Dave Dalton:

Jones Day partners Emily Tait and Carl Kukkonen are here to talk about the recent federal district court decision in Thaler v. Perlmutter and what it can mean for copyright protections involving GenAI. I'm Dave Dalton, you're listening to Jones Day Talks.

Based in Jones Day's Detroit office, partner Emily Tait is a trusted advisor in complex intellectual property and technology matters. She's a leader of Jones Day's global autonomous vehicles, artificial intelligence and robotics team, and she is sought out for her experience and thought leadership on legal and ethical issues posed by artificial intelligence. Carl Kukkonen, out of Jones Day's San Diego office, has more than 25 years' experience assisting clients in developing strong, litigation-ready patent portfolios. As a member of the firm's AI team, Carl's proficiency spans the high technology, life sciences, and energy sectors. His high-tech experience encompasses AI technology such as machine learning, natural language processing with large language models, and computer vision. Carl, Emily, thanks so much for being here today.

Carl Kukkonen:

Hi, Dave.

Emily Tait:

Hey, Dave. Thanks for having us.

Dave Dalton:

You're welcome, you're welcome. I've been looking forward to this one for a while, since you approached me a couple of weeks ago with this idea. I think it's a fascinating topic. We are talking about generative AI, and as it relates to copyright protections. In particular, a federal district court decision came down several weeks ago, I hope I'm pronouncing this right, Thaler v. Perlmutter. Emily, can you give us some background on the case and what happened there?

Emily Tait:

Sure, absolutely. And you're right, this topic is super interesting, and generative artificial intelligence has obviously been the talk of the town for this entire year, and this case illustrates some of the most interesting aspects of generative AI as it relates to copyright law, and just how disruptive it is to traditional notions of copyright. Because in this case, Dr. Thaler is a technologist, and he's become a bit of a famous figure in the world of artificial intelligence because he's pushed the envelope a bit, both on the copyright side and on the patent side. And he's done that in the case of copyright by filing applications for copyright wherein he's listed an artificial intelligence tool that he created and owns, in this case called the Creativity Machine. He listed that artificial intelligence tool as the author of the work in question, and the work in question was called A Recent Entrance to Paradise, which is an image of train tracks, with a tunnel, and some greenery and rocks around it.

It's the type of work that, had it been generated by a human being, would be eligible for copyright under US law. But as I said, in this case he listed his AI tool as the author, and then listed himself as the owner of the artificial intelligence tool. He was asserting himself as the owner of the work, sort of under a work-for-hire type theory. And the copyright office denied registration of that work, and then he filed for reconsideration, and the copyright office denied registration again, explaining essentially that the work did not meet the human authorship requirement of US copyright law. To have a copyright, you need to have a human being who's authored the work in question. And so, it was denied on that basis.

Dave Dalton:

Sure.

Emily Tait:

The case then went up to the district court, and the district court considered it and ultimately affirmed the copyright office's decision that human creativity is at the heart of our copyright law, and this work did not meet that requirement of human authorship. So, very interesting case to push some of these issues to the forefront of the discussion.

Dave Dalton:

It surely is, and when these copyright laws were written however many decades ago, no one envisioned something like this coming along to complicate matters and copyrightability and so forth. Carl, Emily mentioned the human authorship standard. Could you explain a little bit about what that is exactly, and how it applies to this case? Human authorship?

Carl Kukkonen:

Sure. Human authorship arguably finds support in Article I, Section 8, Clause 8 of the Constitution. That reads, "Congress shall have the power to promote the progress of science and useful arts by securing for limited times to," keyword here, "authors and inventors the exclusive right to their respective writings and discoveries." And there have been challenges in the past about who constitutes an author, and it's traditionally almost always been a human being. And with Dr. Thaler's challenge, similar to his challenges in the patent context, the decision in this case was not surprising, because under US law and copyright office guidance, like Emily said, it's well established that copyright can only protect material that's a product of human creativity, and the term author, which is used both in the Copyright Act and the Constitution, has always excluded non-humans.

So, when you're talking about sufficient human authorship, they look at what input a human is providing. Are they modifying or arranging the AI-generated material in a sufficiently creative way? One example is a collage of AI-generated images: maybe the individual images aren't protectable, but the arrangement of them together by a human being might make the collage protectable.

Dave Dalton:

Sure, sure.

Carl Kukkonen:

And one of the things we're going to dive into a little bit later is that the copyright office is trying to bifurcate some of these applications for registration, with subject matter that's created by a human versus subject matter that's created using one of these GenAI platforms.

Dave Dalton:

Sure, sure.

Emily Tait:

Dave, I just want to add a little to what Carl just said. It is important to note that in this case involving Dr. Thaler, in his application for copyright he was clear that this AI tool generated the work autonomously. And the district court noted in its decision that in the district court briefing he had pointed out some different tweaks that he had done to guide the AI tool in some manner, but the court said, look, this isn't in the record. The record is that he was submitting this work and identifying the AI tool as the author, and so that registration didn't really parse out the pieces of the work that Dr. Thaler was contending he authored, as compared to what the AI tool did. And that was the reasoning of the court.

Dave Dalton:

Is it possible, and please forgive me if this sounds, not cynical exactly, but you mentioned Dr. Thaler has brought cases previous to this one, is there any possible way that he was maybe floating this as a test case, just to see where the line was? What do you think?

Emily Tait:

I think it's hard to know exactly what his motivations are, other than he obviously is someone who's very much vested in AI tools and their ability to create. In this case, the work is something that he obviously feels should be covered by a copyright, that it's an original work and that it should be assessed according to the standard authorship requirements certainly, but that the authorship requirement should be expansive enough to include an AI tool. And interestingly, in the decision, the court does note that the Copyright Act itself doesn't define the word author, but the court said, look, while that's the case, if you go back to centuries of settled understanding of what authorship means, that includes and requires a human being, citing the US Constitution's copyright clause, which Carl mentioned earlier, in terms of incentivizing authors and inventors to create. And the court says, look, centuries of settled understanding have concluded that that means a human being. Dr. Thaler's motivations, who knows exactly, but in the context of this case, he certainly was of the opinion that this was a work that was worthy of copyright protection under US law.

Dave Dalton:

Emily, let's stay with you for a second. You mentioned Dr. Thaler has brought other applications, were there other high profile cases, or decisions, I guess, where human authorship, or the lack of human authorship impacted a decision? Are there other cases that come to mind?

Emily Tait:

Yeah, there really are a few. And it's funny, Dave, you and I recorded something a number of years ago on the monkey selfie case, right? And this was Naruto v. Slater, it went up to the Ninth Circuit, so I think it was in 2018-

Dave Dalton:

Five years ago, Emily. It was May of 2018. And I never thought we'd come back around to the monkey selfie podcast, but darn it, we did. So, congrats.

Emily Tait:

I remember then thinking, I think we discussed its implications in terms of artificial intelligence, so it's funny that these cases have now come to the forefront. But that case involved a monkey that essentially had taken selfies using a photographer's camera that had been set up. In that case, the Ninth Circuit decided, on standing grounds, that essentially a monkey does not have standing under the Copyright Act to bring suit for copyright infringement. But in that decision is this notion of what it means to have standing, and an emphasis on what's involved in being an author under the Copyright Act, and that the Copyright Act is replete with references to human beings as having standing, not a monkey. And so, at the time, that decision was understood to have potential implications for AI because it emphasized the point that the benefits of our copyright law flow to human beings and not monkeys.

Dave Dalton:

Sure.

Emily Tait:

There have obviously been other cases over the years as well that have touched on this issue. There was a case out of the Seventh Circuit, 10 or so years ago, Kelley v. Chicago Park District, where the court refused to recognize a copyright in a cultivated garden. So, notwithstanding the fact that the gardener had arranged the plants and had a vision for how it would look, the notion was that there are forces of nature at work there that ultimately result in the garden, and therefore it was not eligible for copyright protection. And there have been a number of other cases, it's surprising, or perhaps not surprising, the scenarios that arise in our case law: people saying that a work was generated by a celestial being, or a spiritual voice in their head, and the copyright office and the courts say, no, no, no, those works are not eligible for copyright protection, you need to have a human author. So, it's something that courts have grappled with, and the copyright office has grappled with, for as long as our copyright laws have been in place.

Dave Dalton:

Sure. And that's why we're here talking today, AI is going to take this to a whole different level, I would think. Hey, Carl, can you talk about sufficient human contribution as it applies to satisfying these requirements? Is that as vague as it might appear to a layperson like myself? Where do you decide what's enough in terms of human authorship or human input or contribution?

Carl Kukkonen:

At this point, I don't think anybody would say we have a bright-line rule of what is sufficient human contribution. From a practitioner standpoint, there was some guidance in March which explained that, when considering an application for registration, the copyright office is going to look at whether the work is basically one of human authorship, with the computer or other device merely being an assisting instrument, or whether the traditional elements of authorship in the work were actually conceived and executed not by man but by a machine. And a lot of the subsequent notices from the copyright office seem to hinge on the fact that a user of one of these GenAI platforms doesn't really have any control over what the output might be. So, in one copyright office decision relating to a graphic novel called Zarya of the Dawn, it found that the text of the graphic novel was protectable under copyright, but the images ultimately were not. And that was based on a lack of sufficient human contribution.

One thing that was interesting about that particular notice is that it went back to an 1884 Supreme Court case, Burrow-Giles Lithographic Co. v. Sarony, and that's when they first considered whether a photograph is something that's subject to copyright protection. And in that case, they looked at the independent judgment by the photographer in selecting and framing and developing the images, so the photographer has some control over the output. And that was dispositive in deeming that photographs were protectable. And it seems like the copyright office and the courts are leaning in the opposite direction here, where, with most GenAI platforms, the user doesn't really have much control over the output.

Dave Dalton:

Sure.

Carl Kukkonen:

And in a later case, a digital artist, Jason Allen, who was using a platform called Midjourney, disclosed that he input numerous revisions and text prompts, at least 624 times, to arrive at an initial version of an image. He further explained that he used Adobe Photoshop to remove flaws and create new visual content through some manual manipulation of the images, and then he later used another platform, Gigapixel AI, to upscale the image, which would increase its size and resolution. Ultimately, the copyright office deemed only that middle portion, the Adobe Photoshop manipulations, to be protectable, because he had some control over the output, as compared to the use of Midjourney on the front end or Gigapixel AI on the back end. And the copyright office asked the applicant to disclaim those portions of the images that were created using those first and third GenAI platforms, and the applicant refused to, and so the application ultimately was refused.

Dave Dalton:

That's got to be maddening for the applicant. You talk about splitting the baby, they're saying this part of it, the Adobe part, I think you said, Adobe Photoshop, that's okay, but what AI did prior to that, and then after, in terms of finishing the project, that's not okay. So, how does the applicant react to that? Do you just pull the whole application then?

Carl Kukkonen:

Yeah, we'll see where things end up. From an outsider standpoint, 624 prompts and inputs in refining an image, that does seem to be a lot of human contribution, even though you may not have full control over the output. The lack of a bright-line rule is going to make these kinds of cases difficult to get on either side of the fence, clearly.

Dave Dalton:

Yeah. I'll tell you what, I want to go back to Emily for a second, they say the devil's in the details sometimes. Emily, did you hear anything or read anything in the final opinion that caught your attention, in terms of that's a surprise, or maybe this is leading us to a point where we have some clarity moving forward? What in the actual written opinion struck you as significant, I guess?

Emily Tait:

Well, what struck me really was the lack of clarity in terms of these issues, where you just said it, the devil being in the details. And Carl touched on some of these, right? How do you parse out the piece of a work that a human created or authored versus the piece that the AI tool may have generated? And the district court actually acknowledges this, and says, look, we're approaching new frontiers in copyright, and artists are going to have AI in their toolbox now. And so, there's going to be all sorts of questions going forward. And it notes things like, how much human input is necessary to qualify the user of an AI system as an author? Or what's the scope of protection over the resulting image? Et cetera.

So, the court's acknowledging all these complexities, but then ultimately I think concludes, look, this case isn't so complex, because here the applicant listed the AI tool as the author, and didn't go through this process of trying to parse it out. So, it's an interesting sort of contrast, the court is acknowledging the complexities that lie ahead, but at the same time saying, look, in this case, our job is relatively straightforward, because of the human authorship requirement, and the manner in which this application was submitted.

Dave Dalton:

I'll pick up on that point and go to Carl, and maybe going back to something Carl touched on earlier. The way I read the notes you provided for me, before we got all this going, it says here, "The copyright office relies on applicant disclosures to identify AI generated content." Now, Carl, how does that play out in the real world? It sounds almost like a glorified honor system to me, how does that actually work out?

Carl Kukkonen:

Like in other branches of IP law, there's a duty of disclosure to the copyright office. So, you need to notify them of anything that has a bearing upon the preparation or identification of the work, or the existence, ownership, or duration of the copyright. What the copyright office is currently asking is that, when you're submitting an application, you have to specify in the author-created field what was human-authored content and what was AI-generated content. And it seems that, based on that admission of what was AI-generated content, they're going to take the position almost upfront that it's not protectable. And so, applicants need to be careful about how they word that, and also have good support for what activities and what interactions they have done with the AI platform to ultimately reach the output that they're trying to protect.

Dave Dalton:

Sure. And you mentioned there had been something of a precedent before AI even, in the application. So, this wasn't something that the office came up with just to identify AI-generated content or work; there was a part of the application prior to this that let you disclose that.

Carl Kukkonen:

Yeah, right. It falls under this general duty-to-disclose requirement, and, at least with the March guidance of this year, they're saying that AI-generated content, you must disclose that under this duty.

Dave Dalton:

Okay. Going back to some of the pre-show, pre-program preparation, this actually appeared in the Jones Day commentary about this topic, which we'll link to. But I'm reading here... Now, this is from the commentary, Emily, that you and Carl wrote. You wrote, "Serious questions remain as to when the output of a GenAI tool can be found to be infringing on third-party works, as numerous lawsuits have already been filed alleging that GenAI tools have scraped and thus unlawfully digested and copied third-party works that are used to train the tool." Now, there's a whole extra layer of complexity. We've been talking about, okay, is there at least a significant component of human authorship? Now we're talking about, perhaps even inadvertently, the Creativity Machine, or whatever Dr. Thaler was using, or anybody else's AI-generated content, might be scraping, to use their term, someone else's work. And then you've got a whole other problem. How concerned should applicants be about this? This is a big issue, I would think.

Emily Tait:

It is a huge issue fundamentally, Dave, and you've hit on the key point in asking the question. Because you have the foundational question of protectability, which we've been talking about: when is a work that's generated by an AI tool, whether in whole or in part, protectable under US copyright law? That's a threshold issue, and it really shows how disruptive GenAI has been to that threshold issue. But then, there's the other foundational question of infringement. And this is something we're seeing almost every week it seems: there's a new lawsuit from prominent artists, authors, and creators, alleging that GenAI tools, the way that they're being used, fundamentally constitute an infringement. Obviously, the facts in all of these different cases may vary depending on the type of tool being used, and what the output is and everything else, but it's going to be really interesting to see how these cases proceed, whether courts make decisions, whether the decisions are ultimately handed to juries. But the fact of the matter is, protectability and infringement, both of these key issues have been rocked by generative AI tools for sure.

Dave Dalton:

Sure, sure. And that's what copyright law is supposed to do, in its very essence: protect someone's work. And now we've all just swerved, based on this technology, into a whole new territory there also. Carl, let's talk about clarity for a second. For copyright applicants using GenAI, where do we get clarity, finally? Is it going to be a legislative matter? Do we need more case law? Where does this start to become clear?

Carl Kukkonen:

It's probably going to be a combination of some challenges to copyright office rejections, followed by some legislation to provide clearer guidance to applicants. It doesn't seem that the adoption of GenAI is going to slow down in any fashion, and so this is going to be much more prevalent in the future.

Dave Dalton:

No doubt. And it's certainly sounding that way. And this conversation got even deeper and more interesting than I thought it would be, and I was expecting a lot of great information to come out. Let's start wrapping up here though. Emily, based on Thaler, and the other information you have at this point, how are you advising clients that might run into this type of situation when it comes to copyright protection?

Emily Tait:

It really depends, it's so fact specific. So, the type of generative AI tool or system that's being used, depending on that, you can have a lot of different issues flowing. But also, the type of client, the industry they're in, the manner in which they expect their employees or contractors to be using these tools, and for what purpose? It's all very fact specific in terms of the advice that you would give. So, a lot of clients are really struggling, and have been for this year, with what to do about this, and what the implications of this type of technology are for their workforce. And we have gotten a lot more inquiries in terms of, what should we be telling our employees? What are some key issues to look out for? And so, there are some guideposts, but it's also very, very individualized depending on the factors that I just mentioned, and also many others.

At an absolute minimum, being very familiar with the terms and conditions that are applicable to the particular tool is essential, because how employees input information into the tool could determine, under those terms and conditions, how that material can then be used to train the underlying technology, things like that. So, being familiar with the terms and conditions, and also really fundamentally understanding your workforce: how far does it reach, where are people located, and what are they likely going to be using these tools for? And obviously, having an organizational understanding of what's our risk tolerance, how do we want our employees to approach using these tools, to ensure that they're being used in a manner that's legal and ethical, and is managing risk appropriately within the organization.

Dave Dalton:

Sure, sure. A lot to unpack and a semi-heavy lift, but I'm sure it's worth it, ultimately. Okay. Carl, regular listeners to Jones Day Talks know that when I ask my panelists about a key takeaway, we're starting to wrap up the program. And we're starting to wrap up the program, so Carl, looking back at this discussion, what we've talked about, if there's one key thing, one most important thing that a listener should go away with, what would you tell them? What do they need to remember?

Carl Kukkonen:

When using a GenAI platform, some of these legal considerations, especially with regard to copyright, are largely dependent on the function within the company and how that output is being used. If it's for a blog post, or maybe a marketing brochure, maybe having the ability to protect the output via copyright is not important, because it's going to be a temporary document, and it's unlikely that competitors are going to copy it. However, if it's your source code and you're using GenAI to generate something that's very fundamental to the company, you run the risk of not having ownership over your full code base. And so, those are just a couple of examples of being mindful of how you're using GenAI within the organization, exactly how those outputs are going to be used, and whether having exclusive ownership of that output is going to be important.

I expect that the registration jurisprudence and legislation are going to continue to evolve, and there may be new disclosure requirements that require you to log what the inputs are into these platforms, or require you to annotate certain portions as being generated by GenAI. And so, I would say that enhanced record keeping is probably going to be very helpful in the future for critical aspects of the company for protection of copyright.

Dave Dalton:

Great, great insights. Emily, same question, give us one key takeaway and wrap us up.

Emily Tait:

Yeah. I think, as I see it, one key takeaway is, of course, GenAI is here, and it's here to stay, and yet it's also evolving constantly, as is the capability of the technology. And the law tends to move much slower than technology, as we all know, and so, I would advise folks to pay attention and see how these issues are resolved, obviously by the courts, but also the copyright office may provide significant guidance that can help inform companies in terms of how to protect and enforce original works, et cetera. The copyright office has had an AI initiative for quite a while, and has been engaged in a really active dialogue with the public on these issues, including most recently issuing a notice of inquiry seeking public comment on really threshold issues linked to copyright and artificial intelligence.

And so, I anticipate that as the copyright office reviews the input that it has received on these issues, it will provide continued guidance. And obviously no guidance is going to be perfect, no guidance is going to provide complete clarity in an area that's very murky and fact intensive, but it should be instructive and helpful. So, for companies and individuals and other organizations that are deeply involved in the creation of content, paying attention to those developments is going to be really key.

Dave Dalton:

Emily, Carl, we will leave it right there. Great information today, some great insights, thanks so much for taking some time. And this issue's not going away, I have a hunch we're going to be talking again probably real soon, so thanks so much.

Emily Tait:

Thank you.

Carl Kukkonen:

Always great chatting with you.

Dave Dalton:

For complete biographies and contact information for Emily and Carl, visit jonesday.com, and while you're there, check out our Insights page for more podcasts, publications, blogs, videos, and other timely content I think you'll find interesting. Subscribe to Jones Day Talks wherever you find your podcast programming. Jones Day Talks is produced by Tom Kondilas. As always, we thank you very much for listening. I'm Dave Dalton, we'll talk to you next time.

Speaker 3:

Thank you for listening to Jones Day Talks. Comments heard on Jones Day Talks should not be construed as legal advice regarding any specific facts or circumstances. The opinions expressed on Jones Day Talks are those of the lawyers appearing on the program, and do not necessarily reflect those of the firm. For more information, please visit jonesday.com.

Insights by Jones Day should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only and may not be quoted or referred to in any other publication or proceeding without the prior written consent of the Firm, to be given or withheld at our discretion. To request permission to reprint or reuse any of our Insights, please use our “Contact Us” form, which can be found on our website at www.jonesday.com. This Insight is not intended to create, and neither publication nor receipt of it constitutes, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.