Fair Use in the Age of AI: Navigating Copyright When Using ChatGPT and AI Tools in the Classroom

The integration of artificial intelligence tools like ChatGPT into educational settings has created both opportunities and legal complexities for teachers, students, and institutions. At the core of these challenges lies a fundamental question: when educators and students use AI tools in classrooms, how do fair use protections apply, and what copyright obligations remain? The answer requires understanding how traditional copyright doctrine intersects with emerging AI technologies, a landscape that continues to evolve through recent court decisions and regulatory guidance.

The Foundation: What Is Fair Use?

Fair use, codified in Section 107 of the U.S. Copyright Act, permits limited use of copyrighted material without permission from rights holders, typically for purposes such as criticism, comment, news reporting, teaching, scholarship, or research. The doctrine exists to allow uses judged to have overriding societal benefit while avoiding significant harm to copyright holders.

Courts evaluate fair use by analyzing four key factors:​

  • The purpose and character of the use examines whether the material has been transformed by adding new expression or meaning, creating something new rather than merely copying verbatim. This transformative aspect has become increasingly important in recent copyright cases.
  • The nature of the copyrighted work considers whether the original work is creative or more factual in nature.
  • The amount and substantiality of the portion taken evaluates how much of the original work was used and whether that amount was necessary for the intended purpose.
  • The effect on potential market value determines whether the use harms the original work’s market or creates competition with it.

Critically, courts have emphasized that no single factor is determinative; rather, judges balance all four factors in context, giving particular weight to market harm. A finding of fair use is not guaranteed—it remains one of the most unpredictable areas of copyright law, ultimately decided on a case-by-case basis through litigation.​

How AI Training and Fair Use Intersect: Recent Court Decisions

Recent federal court rulings provide significant guidance on whether using copyrighted works to train AI models constitutes fair use. In Bartz v. Anthropic (June 2025), U.S. District Judge William Alsup delivered a landmark decision holding that using copyrighted books to train large language models is “exceedingly transformative” and therefore protected fair use. The court reasoned that training AI, like reading a book to improve one’s own writing, represents a fundamentally transformative process where the system learns from works without reproducing or distributing them.​

However, this decision contained important limitations. Judge Alsup explicitly distinguished between legally acquired copyrighted works (fair use) and pirated books (not fair use). The court found that Anthropic’s deliberate downloading of millions of pirated books to build a permanent training library violated copyright law, even though the fair use doctrine protected training on legally obtained materials. This distinction matters significantly for educational contexts, as institutions must ensure they are using only legally licensed or appropriately sourced materials when training or implementing AI systems.​

The court’s reasoning reflects established legal precedent from cases like Authors Guild v. Google (2015) and Authors Guild v. HathiTrust (2014), which found that mass digitization of copyrighted books to create searchable databases and reveal new information constituted fair use.​

AI-Generated Content and Copyright Ownership: A Complex Picture

One of the most confusing aspects for educators is determining who owns the copyright in AI-generated content. Under current U.S. law, copyright protects only works created by humans. The U.S. Copyright Office’s January 2025 report affirmed that works generated solely by AI lack the human authorship necessary for copyright protection. This means neither ChatGPT nor any AI system can hold copyright over its outputs.​

Consequently, rights in AI-generated content are allocated by contract rather than by copyright: the provider’s terms of service determine whether outputs belong to the user or the AI company. OpenAI’s terms assign users ownership of the outputs they generate with ChatGPT, though existing copyright law still applies if those outputs closely imitate protected works.

This creates a paradox for classroom use: if a student uses ChatGPT to generate an essay, and AI-generated content itself cannot be copyrighted, the student might assume the assignment is free from copyright concerns. That reasoning is legally incorrect. If ChatGPT’s output incorporates copyrighted material from its training data, whether by close paraphrase or substantial similarity to existing works, the student’s work could still infringe copyright, even though the AI output itself lacks copyright protection.

Copyright Issues When Using AI in Classroom Settings

Educators face three primary copyright concerns when incorporating AI tools:

1. Using Copyrighted Material as AI Input

When teachers prompt AI tools with copyrighted material, such as asking ChatGPT to create a modified version of a published article or story, this raises copyright questions. Guidance from Australia’s Copyright Agency indicates that including copyrighted material in an AI prompt can create compliance issues. Teachers operating under education-specific licences, such as the Education Statutory Licence in Australia or similar provisions elsewhere, may have some latitude if the use is solely for educational purposes and does not unreasonably prejudice copyright owners’ interests.

In the United States, teachers might argue that such use falls under fair use when it is transformative, limited in scope, and directly related to classroom instruction. However, this remains fact-specific and not automatically protected.​

2. AI Outputs Containing Copyrighted Material

Even more problematically, AI tools may generate outputs that contain or closely replicate copyrighted material from their training data, without proper attribution or permission. If teachers distribute such content to students without verification, they may expose both themselves and the institution to liability. Schools should not republish third-party material generated by AI tools in reliance on copyright exceptions unless they can verify the original source and confirm that use complies with copyright law.​

3. Responsibility for Accuracy and Verification

Teachers and students who use AI-generated content bear legal responsibility for ensuring that content does not infringe copyright or contain misinformation. Even when AI use is permitted, instructors should require students to verify the accuracy of all citations and references included in AI-generated work. This responsibility cannot be delegated to the AI system.​
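
Parts of this verification can be automated. As one narrow, illustrative check, the sketch below (in Python, assuming network access; it uses the public Crossref REST API, which covers only Crossref-registered DOIs) confirms that a cited DOI exists at all. A failed lookup is a strong hint that a citation was fabricated, while a successful one still says nothing about whether the source actually supports the claim.

    import json
    import urllib.error
    import urllib.request

    def doi_exists(doi: str) -> bool:
        # Look the DOI up in the public Crossref registry. A 404 response is
        # a strong signal that an AI-generated citation may be fabricated.
        # (This checks existence only, not whether the citation is accurate.)
        url = f"https://api.crossref.org/works/{doi}"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp).get("status") == "ok"
        except urllib.error.HTTPError:
            return False

    # doi_exists("10.1234/placeholder") -> False for an unregistered DOI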

Academic Integrity: A Distinct Concern From Copyright

It is crucial to distinguish between copyright violations (a legal matter) and academic integrity violations (an institutional policy matter). While related, these are separate concerns requiring different approaches.

Academic integrity policies focus on unauthorized use of AI tools, plagiarism, and failure to disclose AI assistance. Many institutions treat the use of AI to complete assignments without permission as cheating, regardless of copyright considerations. The key principle across educational institutions is transparency and disclosure.​

Current best practices require that:

  • Clear policies about what AI use is permitted must be communicated in syllabi, assignment instructions, and classroom discussions
  • Students must disclose any AI use and document exactly which tools they used, including model names, dates, and prompts when required​
  • Attribution should follow established citation formats, such as: “ChatGPT-3. (YYYY, Month DD of query). ‘Text of your query.’ Generated using OpenAI. https://chat.openai.com/”[13] (a small helper for generating this format appears after this list)
  • Limitations must be specified—for example, stating that AI may be used for brainstorming but not for generating entire essays​
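
Because the required details (tool, date, and query text) are structured, the attribution format in the list above can be generated mechanically. A minimal sketch in Python; the function name and parameters are illustrative, and the output mirrors the sample format above rather than any official citation standard:

    from datetime import date

    def format_chatgpt_citation(query_text: str, query_date: date,
                                model: str = "ChatGPT-3") -> str:
        # Renders the sample attribution format quoted in the list above;
        # adapt to whatever citation style your institution requires.
        when = query_date.strftime("%Y, %B %d")  # e.g. "2025, March 14"
        return (f"{model}. ({when} of query). "
                f"\u2018{query_text}\u2019 "
                "Generated using OpenAI. https://chat.openai.com/")

    # format_chatgpt_citation("Summarize the four fair use factors.",
    #                         date(2025, 3, 14))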

Three Institutional Policy Approaches

Research on educational fairness identifies three distinct policy models:​

Explicitly forbidding AI use is the most restrictive approach, but it reduces ambiguity about student expectations and potential penalties.

Having no explicit policy creates dangerous uncertainty: different students interpret permissions differently, potentially disadvantaging students from less privileged backgrounds who may be less familiar with the technology or less willing to take risks with tools they believe are forbidden.

Explicitly permitting AI use with clear guidelines is considered the fairest approach from an equity perspective, as it provides transparency about what is accepted, how students should document their use, and what practical consequences their technology choices have.

Legal Compliance and Data Protection: Beyond Copyright

Schools must consider laws beyond copyright when using AI tools in classrooms. Key federal protections include:

FERPA (Family Educational Rights and Privacy Act) restricts how schools handle student educational records and personally identifiable information (PII). Schools cannot feed student data into any AI tool without ensuring FERPA compliance and obtaining proper parental consent where required. A March 2025 class action lawsuit addressed this exact concern, highlighting institutional vulnerability to liability.​

COPPA (Children’s Online Privacy Protection Act) similarly protects children’s data online, requiring parental consent before collecting information from students under 13.​

Schools should look for AI platforms with clear compliance documentation, third-party audits, and explicit policies against reselling student data. Platforms should allow educators to preview, approve, or block AI-generated content before it reaches students.​
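
What such a preview-and-approve gate might look like is sketched below. This is a hypothetical illustration, not any particular platform’s API; the class and function names are invented, and real products expose this control through their own interfaces.

    from dataclasses import dataclass
    from enum import Enum

    class ReviewStatus(Enum):
        PENDING = "pending"      # awaiting educator review
        APPROVED = "approved"    # cleared for release to students
        BLOCKED = "blocked"      # withheld (e.g., inaccurate or possibly infringing)

    @dataclass
    class AIContentItem:
        text: str
        source_tool: str                          # e.g. "ChatGPT"
        status: ReviewStatus = ReviewStatus.PENDING

    def release_to_students(item: AIContentItem) -> str:
        # Only explicitly approved content is ever released; pending and
        # blocked items are held back by default.
        if item.status is ReviewStatus.APPROVED:
            return item.text
        raise PermissionError(f"Item is {item.status.value}; educator approval required.")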

Developing a Comprehensive Classroom AI Policy

Effective policies address multiple dimensions. Published frameworks for generative AI in education suggest schools consider:

  • Data privacy and security: How is student information protected?
  • Access and equity: Do all students have equal access to AI tools?
  • Academic integrity: What constitutes appropriate use?
  • Citations and references: How should AI-generated material be cited?
  • Assessment design: Are assessments structured to reduce inappropriate AI reliance?
  • Professional development: Are teachers trained on AI capabilities and limitations?

Schools should update existing acceptable use agreements to specifically address AI tools. Policies should include age-appropriate definitions of academic integrity, cheating, and plagiarism in the AI context, provide examples of misconduct using AI tools, and clearly outline consequences.​

Practical Implementation Recommendations

For Educators

Teachers should adopt several concrete practices:

  • Provide explicit guidance in syllabi specifying which assignments permit AI use, which tools are permitted, and what documentation is required.
  • Teach students to verify all information and citations generated by AI, treating AI as a tool whose outputs require human judgment and fact-checking.
  • Design assessment strategies that reduce the scope for uncritical AI use, for example oral presentations, in-class writing, or assignments that ask students to critique and analyze AI outputs rather than simply using them to generate work.
  • Ask students to explain their work verbally or in follow-up conversations, which can reveal whether they actually understand the concepts or merely used AI to generate answers.
  • Audit AI-generated content before sharing it with students, checking for factual accuracy, potential bias and representation issues, and inadvertent copyright infringement.

For Students

Students should understand that AI tools are not infallible. ChatGPT and similar systems frequently generate plausible-sounding but false information, misattribute sources, and sometimes create entirely fictional citations. Students must verify any information before incorporating it into academic work.​

When AI use is permitted, students should document their process, including what tool they used, the exact prompt provided, the date and time of the query, and how they used the output—whether they used it verbatim, paraphrased it, or used it as a starting point for their own analysis.​
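
One lightweight way to keep that documentation consistent is a structured record attached to each submission. A minimal sketch in Python; the field names and example values are illustrative, not a standard:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class AIUseRecord:
        tool: str             # e.g. "ChatGPT (GPT-4)"
        prompt: str           # the exact prompt submitted
        queried_at: datetime  # date and time of the query
        usage: str            # "verbatim", "paraphrased", or "starting point"

    record = AIUseRecord(
        tool="ChatGPT (GPT-4)",
        prompt="Explain the four fair use factors in plain language.",
        queried_at=datetime(2025, 3, 14, 10, 30),
        usage="starting point",
    )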

For Institutional Leaders

Schools should conduct regular audits of third-party AI tools to ensure compliance with FERPA, COPPA, and other applicable laws. Provide professional development for teachers on both AI capabilities and copyright/academic integrity implications. Create governance structures for approving AI tools and establishing institution-wide standards rather than allowing ad-hoc adoption by individual teachers.

The Evolving Legal Landscape

The legal framework governing AI and copyright in education remains unsettled. The U.S. Copyright Office’s Part 3 report, expected in late 2025, is anticipated to address legal implications of training AI on copyrighted works, licensing requirements, and potential liability—guidance that will likely influence educational institutions’ decisions.​

Courts will continue defining fair use boundaries as more cases proceed to trial. Meanwhile, policymakers in various jurisdictions are developing regulatory frameworks. The European Union’s AI Act, for instance, requires providers of general-purpose AI models to publish summaries of the content used to train them.

Conclusion: Balancing Innovation With Legal Responsibility

Fair use provides genuine protections for educational use of AI tools, particularly when use is transformative, limited in scope, and directly supports learning. The Bartz v. Anthropic decision confirms that training AI on copyrighted works can constitute fair use when materials are legally obtained. However, this legal protection does not eliminate institutional responsibility for ensuring compliance with data protection laws, avoiding distribution of potentially infringing content, and maintaining academic integrity standards.

The most defensible approach involves transparency, clear policies, verification of outputs, proper documentation of AI use, and deliberate assessment design that supports rather than undermines learning outcomes. As AI tools become increasingly embedded in educational practice, institutions that establish clear policies, train faculty and students on appropriate use, and remain vigilant about both copyright law and data protection will be best positioned to maximize AI’s benefits while minimizing legal risk.