© 2024 Brown Rudnick LLP. Attorney advertising.

All Rights Reserved.

5/27/2025 9:16:20 PM | 8 minute read

Walters v. OpenAI: A Game-Changing Verdict Reshaping AI, Defamation and Tech's Future

By Erick Robinson, Partner

In a decision that could reshape the legal and technological landscape, the Superior Court of Gwinnett County, Georgia, issued a ruling on May 19, 2025, in Walters v. OpenAI, L.L.C. (Case No. 23-A-04860-2), granting summary judgment in favor of OpenAI, the developer of the artificial intelligence chatbot ChatGPT.

This case, which pitted Mark Walters—a nationally recognized radio host and Second Amendment advocate—against the AI pioneer, turned on a false statement generated by ChatGPT alleging that Walters was involved in embezzlement. The ruling not only dismisses Walters’ defamation claim but also sets a precedent that may influence how AI developers, legal professionals and technologists navigate the intersection of emerging technology and traditional tort law. This article provides an in-depth summary, a comprehensive analysis and forward-looking insights into the ruling’s implications, with recommendations for stakeholders in this evolving domain.

Case Summary: The Genesis of the Dispute

The saga began on May 3, 2023, when Frederick Riehl, a journalist and editor of AmmoLand.com—a news and advocacy site focused on Second Amendment rights—and a member of the board of directors of the Second Amendment Foundation (SAF), interacted with ChatGPT. Riehl was researching a lawsuit, SAF v. Ferguson, filed by SAF against the attorney general of Washington, which alleged unconstitutional harassment of the organization due to its political stance on gun rights. Seeking a summary of the publicly available complaint, Riehl pasted text into ChatGPT and later provided a URL link to the document. Initially, ChatGPT accurately summarized the input text. However, upon receiving the URL, the chatbot—despite stating it could not access the internet—produced a fabricated summary, alleging that the lawsuit involved embezzlement by an SAF treasurer and chief financial officer, later identifying this individual as Mark Walters.

Walters, who hosts two nationally syndicated radio programs with an estimated 1.2 million listeners per 15-minute segment, is a vocal advocate for gun rights, authoring books and serving as a media spokesperson for SAF. Despite his public profile, he contended in his lawsuit that he was not a public figure, citing limited media appearances (e.g., one Fox Business interview and one local NBC segment). The false ChatGPT output—which Riehl recognized as inaccurate within 90 minutes and did not republish—prompted Walters to sue OpenAI for defamation, alleging harm to his reputation, though he later conceded he suffered no damages. OpenAI moved for summary judgment under Georgia law (O.C.G.A. § 9-11-56), arguing that the statement lacked defamatory meaning, that Walters could not prove fault and that no damages were recoverable. The court agreed on all counts, delivering a decisive victory for OpenAI.

Detailed Legal Analysis: The Court’s Rationale

The court’s 22-page order, penned by Judge Tracie Cason, articulates three independent bases for granting summary judgment, each reflecting a meticulous application of defamation law and First Amendment principles. Let’s unpack each ground with depth and nuance.

1. Absence of Defamatory Meaning

Under Georgia law, a statement is defamatory only if it can reasonably be understood by a hypothetical reader as conveying “actual facts” about the plaintiff (Bollea v. World Championship Wrestling, Inc., 271 Ga. App. 555). The court emphasized that context, including disclaimers and the reader’s experience, is critical to this determination. ChatGPT’s output was accompanied by multiple warnings: it could not access the internet, its knowledge cutoff was September 2021 (predating the lawsuit), and its Terms of Use cautioned users about potential inaccuracies or “hallucinations”—a term for AI-generated fabrications. Riehl, an experienced user aware of past fictional outputs, encountered these disclaimers and quickly confirmed the output was false, testifying that the chatbot “completely fantasized” the response.

The court relied on precedents like Farah v. Esquire Mag. (736 F.3d 528), which holds that a reasonable reader, after reflection, would not construe such output as factual given the “prominent indicia” of unreliability. Riehl’s access to the actual complaint and SAF press release further undermined any defamatory interpretation. This analysis suggests that AI tools, when transparently labeled as probabilistic, may evade liability if users are expected to exercise due diligence—a novel extension of traditional publisher liability standards.

2. Failure to Establish Fault

Defamation plaintiffs must prove fault, with the threshold varying by the plaintiff’s status. For private individuals, Georgia requires at least ordinary negligence (Am. C.L. Union, Inc. v. Zeh, 312 Ga. 647), while public figures must demonstrate “actual malice”—knowledge of falsity or reckless disregard thereof (New York Times Co. v. Sullivan, 376 U.S. 254). The court classified Walters as a public figure, citing his extensive media presence, large audience and voluntary role in Second Amendment debates. This designation, rooted in Gertz v. Robert Welch, Inc. (418 U.S. 323), reflects his “especial prominence” and access to counter false narratives via his radio platform.

  • Negligence Standard: Walters failed to identify a standard of care or evidence that OpenAI breached it. OpenAI’s expert, Dr. White, testified—unrebutted—that the company leads the industry in reducing hallucinations through data training and human feedback, supplemented by robust user warnings. The court rejected Walters’ argument that deploying a fallible AI constitutes negligence, likening it to strict liability, which Gertz and Georgia law prohibit.
  • Actual Malice Standard: Walters offered no evidence that OpenAI knew the specific output was false or acted with reckless disregard. Dr. White’s testimony highlighted OpenAI’s efforts to mitigate errors, and the court found that general awareness of AI limitations does not meet the “clear and convincing” malice threshold (Jones v. Albany Herald Publ’g Co., 290 Ga. App. 126).

This dual failure underscores a high bar for AI defamation claims, particularly for public figures, and suggests that proactive error mitigation and disclosure may shield developers.

3. Inability to Recover Damages

Damages are a cornerstone of defamation claims, and Walters’ case collapsed here. He admitted no actual economic or reputational harm, and alternative damage theories were foreclosed:

  • Punitive Damages: Georgia law (O.C.G.A. § 51-5-2) requires a retraction request before seeking punitive damages, which Walters did not pursue. His argument that a retraction was impractical was deemed irrelevant under Mathis v. Cannon (276 Ga. 16).
  • Presumed Damages: Typically available for defamation per se (e.g., imputing a crime), presumed damages were rebutted by Walters’ own testimony and Riehl’s disbelief in the output. Moreover, Gertz and Zeh prohibit presumed or punitive damages for matters of public concern without actual malice, which Walters could not prove. The court classified the SAF lawsuit as a public issue, distinguishing it from private disputes like the credit report in Dun & Bradstreet, Inc. v. Greenmoss Builders, Inc. (472 U.S. 749).

This damages analysis reinforces First Amendment protections for speech on public issues, even when generated by AI, and limits recovery to provable harm.

Insights: A Crossroads of Law and Technology

This ruling is a watershed moment, blending the rigidity of defamation law with the fluidity of AI innovation. Several insights emerge:

  • AI as a Non-Publisher: The court treats ChatGPT as a tool rather than a traditional publisher, shifting liability to users like Riehl to verify outputs. This echoes historical cases where intermediaries (e.g., bookstores) were not liable for content (Smith v. California, 361 U.S. 147), but extends it to autonomous systems—a legal evolution worth watching.
  • Public Figure Doctrine: The expansive definition of Walters as a public figure, based on niche influence rather than mainstream fame, may broaden this category, affecting activists and commentators. This could insulate AI developers from claims by similar plaintiffs but leave private individuals more vulnerable.
  • Disclaimer Reliance: The ruling’s emphasis on disclaimers suggests a future where legal protection hinges on user education, potentially standardizing such notices across AI platforms. However, this raises ethical questions about whether users can reasonably be expected to navigate AI’s complexities.

The decision also highlights a tension between innovation and accountability. By excusing OpenAI’s errors, it may accelerate AI deployment, but it risks normalizing inaccuracies, challenging the reliability expected of informational tools.

Broader Implications and Future Impacts

The Walters ruling could reverberate across legal, technological and societal spheres:

  • Legal Precedent: As a state court decision, its authority is limited, but it may influence federal courts or other jurisdictions grappling with AI defamation. Expect test cases involving private plaintiffs or less transparent AI systems to refine this precedent. The ruling’s alignment with First Amendment principles may also embolden tech firms to resist liability expansions.
  • Technological Development: AI developers might prioritize transparency (e.g., real-time error flags) and invest in hallucination reduction, though the ruling suggests current efforts suffice legally. This could spur a race to integrate human oversight or hybrid AI-human systems, balancing efficiency with accuracy.
  • Public Trust and Media: If AI errors are legally excusable, trust in tools like ChatGPT for factual reporting may erode, boosting demand for human-verified content. Journalists and publishers could face increased pressure to fact-check AI outputs, reviving traditional editorial roles in a digital age.
  • Regulatory Landscape: The decision may prompt lawmakers to consider AI-specific regulations, such as mandatory accuracy standards or liability thresholds, especially as public concern over misinformation grows. The European Union’s AI Act, with its risk-based approach, offers a potential model, though U.S. free speech norms may resist similar measures.

Long-term, this ruling could shape a bifurcated AI ecosystem: one for entertainment or creative use (where errors are tolerated) and another for factual domains (demanding higher reliability). The balance between fostering innovation and protecting individuals will be a key battleground.

Recommendations for Stakeholders

For Lawyers

  • Potential Plaintiffs: Counsel should meticulously document damages, even intangible ones like emotional distress, to meet the actual harm requirement. Challenging the efficacy of AI disclaimers—arguing they are buried or unclear—could open new legal avenues, especially for private plaintiffs. Staying versed in First Amendment case law and state tort variations will be essential as AI cases proliferate.
  • Potential Defendants: For tech clients, emphasize proactive measures—detailed user warnings, error logs, and mitigation strategies—to bolster negligence defenses. Preemptive legal audits of AI systems could identify vulnerabilities, while crafting Terms of Use to mirror OpenAI’s successful model may limit exposure. Collaboration with technologists to understand AI behavior will enhance credibility with courts.
  • Emerging Strategies: Explore class actions if AI errors affect multiple individuals, or seek injunctive relief to compel corrections, bypassing damages hurdles. Monitoring legislative trends (e.g., AI liability bills) will allow proactive adaptation.

For Technologists

  • Technical Enhancements: Invest in advanced training datasets and real-time validation (e.g., cross-referencing with trusted sources) to minimize hallucinations. Implementing user-facing indicators—color-coded accuracy scores or confidence levels—could empower users and reduce legal risks.
  • User Interface Design: Design intuitive interfaces that highlight data limitations (e.g., knowledge cutoffs, access restrictions) and encourage verification. Gamifying fact-checking or offering tutorial prompts could educate users without overwhelming them.
  • Legal-Tech Collaboration: Partner with legal experts to refine Terms of Use and compliance frameworks, ensuring they withstand judicial scrutiny. Engaging ethicists to address AI’s societal impact could preempt regulatory backlash and build public trust.
  • Industry Standards: Advocate for voluntary AI accuracy benchmarks, potentially through consortia like the Partnership on AI, to self-regulate and avoid draconian laws. Open-sourcing error mitigation techniques could foster collective progress.
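Several of the measures above—accuracy disclaimers, knowledge-cutoff notices and confidence indicators—are inexpensive to implement. The Python sketch below is purely illustrative: the function names, thresholds and response structure are assumptions made for this article, not OpenAI’s API or any production design.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Illustrative sketch only: names, thresholds and the cutoff date are
# hypothetical assumptions, not any vendor's actual API or policy.

KNOWLEDGE_CUTOFF = date(2021, 9, 1)  # the cutoff the Walters court noted

@dataclass
class AnnotatedResponse:
    text: str
    confidence: float        # assumed model-reported score in [0, 1]
    disclaimers: List[str]

def annotate(text: str, confidence: float,
             asks_about: Optional[date] = None) -> AnnotatedResponse:
    """Attach the kinds of user-facing warnings the court weighed."""
    notes = ["Output may contain inaccuracies; verify against primary sources."]
    # Flag questions about events the model cannot have seen.
    if asks_about is not None and asks_about > KNOWLEDGE_CUTOFF:
        notes.append(
            f"The question concerns events after the model's knowledge "
            f"cutoff ({KNOWLEDGE_CUTOFF.isoformat()})."
        )
    # Surface a low-confidence warning rather than hiding the score.
    if confidence < 0.5:
        notes.append("Low confidence: treat this answer as unreliable.")
    return AnnotatedResponse(text, confidence, notes)
```

In a real system the confidence score would come from the model or a calibration layer; the point is simply that attaching such disclaimers to every answer is cheap, and after Walters it may carry real legal weight.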

For Policymakers and Educators

  • Regulatory Guidance: Develop balanced AI guidelines that protect innovation while addressing misinformation, perhaps through sandbox testing of new models. Public awareness campaigns could educate users on AI’s limitations, reducing unrealistic expectations.
  • Educational Initiatives: Integrate AI literacy into curricula, teaching critical evaluation of digital content. Workshops for journalists and advocates could bridge the gap between technology and traditional fact-finding.

Conclusion: A New Frontier Beckons

The Walters v. OpenAI ruling is a clarion call for the legal and tech communities to adapt to an AI-driven world. By shielding OpenAI from liability, it champions innovation but underscores the need for vigilance. For lawyers, it’s a reminder to evolve strategies beyond analog precedents; for technologists, a challenge to engineer trustworthiness into AI systems. As society navigates this frontier, the balance between free expression, technological progress, and individual rights will define the next chapter of digital governance. This case, while a victory for OpenAI, is merely the opening salvo in a broader dialogue—one that demands collaboration, foresight and a commitment to ensuring AI serves humanity’s best interests.

Tags

ai, defamation, openai, brand & reputation management, intellectual property, intellectual property litigation, litigation & dispute resolution, trademark copyright & advertising, artificial intelligence, cybersecurity & data privacy, technology

