AI Lawyer Blog

AI-Generated Works: Where US Copyright Draws the Line

Greg Mitchell | Legal consultant at AI Lawyer

3 minutes to read

Downloaded 2898 times


On March 2, 2026, the US Supreme Court declined to hear Stephen Thaler’s petition in Thaler v. Perlmutter (No. 25-449), reflected in the Court’s order list. This is not a new Supreme Court opinion about AI. It is a denial of certiorari, which leaves the lower court ruling in place.

What remains controlling is the D.C. Circuit’s March 18, 2025 decision in Thaler v. Perlmutter, including the court’s core statement that copyrightable work must be authored “in the first instance” by a human being.

The business problem is not “Can we use AI?” You can. The problem is what you can own and enforce when a system generates most of the expression. The US Copyright Office’s 2023 guidance gives a practical test: did a human create the “traditional elements of authorship,” or did the system determine them? The Office’s 2025 copyrightability report adds the point many teams still underestimate: prompting, even extensive prompting, usually does not establish authorship in the output because it often does not show human control over the final expressive choices.

Here is the simplest framing for stakeholders:

  • AI-generated output: the system determines the expressive elements. Higher risk the output itself is not protectable.

  • AI-assisted work: a human authors expression that remains identifiable in the final asset (text, edits, selection, arrangement, compositing). Better odds of a protectable human layer.

AI use does not automatically destroy copyright, but it can create a copyright gap when the valuable expression is mostly machine-determined. Build deliverables where the human-authored layer is real, documentable, and claimable.

United States Supreme Court Building in Washington, D.C.



What Thaler actually decided and what it did not


On March 2, 2026, the Supreme Court declined to review Thaler’s petition, as shown on the Court’s order list. That does not create a new Supreme Court rule. It leaves the decision below in place.

The facts mattered. As described in the D.C. Circuit’s opinion, Thaler’s registration application for A Recent Entrance to Paradise listed the “Creativity Machine” as the sole author and described the work as created autonomously by a machine; Thaler listed himself as owner, not author. So this case tested an AI-only authorship claim, not a hybrid workflow.

The D.C. Circuit affirmed the refusal of registration in Thaler v. Perlmutter. The court’s core point is straightforward: copyrightable work must be authored “in the first instance” by a human being, so a machine cannot be the recognized author of a copyrighted work. That is consistent with the Copyright Office’s framework in its 2023 guidance.

The decision is also easy to overread. It does not say that “AI use destroys copyright.” It addresses what happens when a work is presented as authored exclusively by a machine. It also does not resolve every question about mixed workflows where humans write, edit, curate, design, arrange, or transform output.

If your workflow lets AI determine most of the final expression, your copyright position will often be thinner than stakeholders expect. Design and document the human-authored layer on purpose.



The human authorship line in plain English


US copyright does not turn on whether AI was used. It turns on whether a human authored the protectable expression in the final work. The US Copyright Office frames this as a practical test in its 2023 AI registration guidance: did a person create the “traditional elements of authorship,” or did the system determine them?

Ideas, goals, and instructions are not enough. Copyright protects original expression, not effort, and not the fact that you asked for a certain style.

Prompting is where many teams get stuck. The Copyright Office explains in its 2025 copyrightability report that even repeated prompt iteration usually does not establish authorship in the output because it often does not show human control over the final expressive choices. If the system is deciding the composition, the details, the wording, and the visual elements, the system is supplying the expression that copyright law cares about most.

Human authorship looks different. It shows up in work product you can point to, describe, and preserve.

Common examples that strengthen the human layer:

  • Human-written copy, scripts, captions, and narration

  • Meaningful human editing and rewriting that changes expression, not just cleanup

  • Human compositing, retouching, or revision that adds original expression

  • Human selection, coordination, and arrangement of components into a coherent work, consistent with how the Office analyzed the compilation aspects in Zarya of the Dawn

Common examples that usually do not carry the claim by themselves:

  • “We wrote great prompts”

  • “We picked the best output”

  • “We chose a style slider value”

  • “We decided the theme and vibe”

The Copyright Office’s reasoning in Théâtre D’opéra Spatial and SURYAST illustrates the same point: if the tool determines the expressive result, you cannot claim authorship in that AI-generated layer.

If the AI layer is doing most of the expressive work, plan to claim the human layer only, and keep proof of it.



Where the threshold may still be met


The current US approach is not “no copyright if AI was used.” It is closer to “separate the layers.” The US Copyright Office has said it has registered works that include AI-generated material while limiting protection to the human-authored portions, consistent with its 2023 guidance and its 2025 copyrightability report.

The question is whether the human contribution is identifiable in the final work.

The easiest way to see how this works is to look at the Office’s published decisions.

  • In Zarya of the Dawn, the Office treated the human-written text and the selection, coordination, and arrangement of the overall work as protectable, while refusing to treat the individual AI-generated images as human-authored. This is a common business pattern: the package can be protectable even if some embedded AI visuals are not.

  • In Théâtre D’opéra Spatial, the applicant described extensive prompt iteration. The Review Board still concluded that prompting did not make the user the author of the Midjourney image because the system determined the expressive result. The Board also indicated that human edits might matter in the right case, but the claim still had to exclude AI-generated material. Prompt effort is not the same thing as authorship.

  • In SURYAST, the applicant selected a base image, a style image, and a style-transfer strength using the RAGHAV app. The Review Board concluded the tool still determined the expressive outcome, and it treated the user’s control as too limited to support authorship in the output. Calling AI “assistive” does not help if the system is still deciding the expression.

One table is useful here because it compresses the practical consequences into something teams can scan.

| Asset pattern | What is often protectable | What is often weak or excluded |
| --- | --- | --- |
| Human-written article with AI used for brainstorming | Human-written text and edits | AI-generated passages not meaningfully rewritten |
| Marketing page or deck using AI visuals | Human copy plus human selection, coordination, and arrangement | Standalone AI-generated images |
| Prompt-to-final illustration with minimal edits | Possibly the surrounding compilation layer | The generated image itself |
| Style-transfer output with limited user control | Sometimes human edits if substantial | Output determined by the tool |

If your best evidence of “authorship” is a prompt log plus a final export, your copyright position is usually thinner than stakeholders expect.



Business consequences: the copyright gap


This is where the rule becomes expensive. Companies commission creative work because they expect exclusivity. If the most valuable expressive layer is AI-generated, exclusivity can be weaker than stakeholders assume.

You can pay for a deliverable and still have limited copyright leverage over the output.

The gap shows up in four places.

Ownership assumptions break. Teams often say “we own it” when they mean “we paid for it.” Payment and file delivery can be true, while copyright protection for the AI-generated layer is still thin under the human authorship baseline reinforced by Thaler and explained in the Copyright Office’s 2023 guidance.

Contracts overpromise. Template clauses like “all deliverables are original,” “all IP is assigned,” and “vendor warrants full ownership” can become inaccurate if a deliverable relies heavily on AI-generated material. The Office’s analysis in Théâtre D’opéra Spatial and SURYAST is a warning against treating prompting, slider choices, or general direction as authorship.

Registration becomes a workflow problem. The Copyright Office expects disclosure and accurate description of the human contribution, as described in the 2023 guidance. If a team cannot separate human-authored content from AI-generated content, filing becomes riskier and slower, and the resulting registration may be narrower than the business expects.

Enforcement strategy changes. If copyright in the raw AI layer is thin, enforcement tends to depend on what is clearly human-authored, including the text, edits, and arrangement layer highlighted in Zarya of the Dawn. In many disputes, the strongest position may be the protectable compilation and human-authored copy, not exclusive rights in each generated image.

If a competitor copies the AI-generated layer, you may have less leverage than you expect unless the human-authored layer is substantial and well documented.

Judge’s gavel on a computer keyboard.



Practical playbook for companies and creators


Teams do not need philosophical clarity. They need a workflow that produces assets they can register, license, and defend. The Copyright Office’s 2023 guidance and its 2025 report point to the same operational approach: identify the human-authored layer, keep proof of it, and claim it accurately.

Start with documentation. Keep the material that shows what a human actually authored.

  • drafts and redlines for text

  • tracked changes, comments, and revision history

  • layered design files

  • compositing and retouching steps

  • notes on what was added by a human versus generated by a system

Prompt logs can still be useful as process evidence, but they are rarely sufficient on their own. The Office’s analysis in the 2025 report explains why: prompting often does not show control over final expression.

Next, separate the layers in the deliverable. Many teams collapse everything into one file and one story. That makes registration and enforcement harder. The “layering” approach in Zarya of the Dawn is a practical model. Treat the final asset as a bundle of components, then identify what is human-authored.

  • human-written copy

  • human-edited text

  • human-created arrangement, sequencing, and layout

  • AI-generated images or passages

  • human edits to AI output, if they add original expression

Registration is where this discipline pays off. The 2023 guidance expects disclosure of AI-generated content and an accurate description of the human contribution. If AI-generated portions are more than trivial, be prepared to exclude or disclaim them and claim only the human-authored material. This is exactly the kind of issue that derailed the claim in Théâtre D’opéra Spatial.

Contracts should follow the same logic. Many disputes are not about whether a company can use an asset today. They are about whether the company has clean rights when a campaign scales, a partner licenses the work, or a competitor copies it. Upgrade clauses so they reflect reality in AI workflows.

  • require disclosure of AI tools used and which elements are AI-generated

  • define what is being assigned with specificity (human-authored text, edits, arrangement)

  • require delivery of source and layered files to preserve evidence of human authorship

  • include broad usage rights even if parts of the output are not protectable

Finally, adjust enforcement posture. If copyright in a raw AI layer is thin, the company should rely more on what is clearly human-authored and on adjacent protections where available, including trademarks, confidentiality, trade secret controls for workflows, and contract restrictions. The goal is not to avoid AI. The goal is to avoid building valuable assets on a layer you cannot realistically control.

Build deliverables where the human-authored layer is clear, provable, and contractually supported.



Conclusion


The Supreme Court’s March 2, 2026 denial of review in Thaler v. Perlmutter did not create a new AI rule. It left in place the D.C. Circuit’s decision in Thaler v. Perlmutter, and the practical baseline remains the same: US copyright still begins with human authorship.

The Copyright Office’s 2023 guidance and 2025 copyrightability report explain how to operate within that baseline. Purely AI-generated material is typically not protectable as such. AI-assisted works can still qualify, but only for the human-authored parts and only if those parts are accurately described and supported by real evidence.

©2026 AI Lawtech Sp. z O.O. All rights reserved.