Can Artificial Intelligence Platforms be Held Liable for Defamation?
By: Sydney Coker
On May 4, 2023, journalist Fred Riehl was researching a lawsuit, The Second Amendment Foundation v. Robert Ferguson, using an artificial intelligence chatbot, ChatGPT.[1] In his interaction with ChatGPT, Riehl provided the URL of the complaint filed by The Second Amendment Foundation and asked ChatGPT to provide “a summary of the accusations in the complaint.”[2] In response, the chatbot produced a series of false allegations purportedly made by the Second Amendment Foundation against Mark Walters, an individual who is neither a plaintiff nor a defendant in the lawsuit.[3]
Specifically, the chatbot stated that the Second Amendment Foundation was suing Walters for “defrauding and embezzling funds from the organization as its treasurer and chief financial officer.”[4] Walters, however, holds no position in the Second Amendment Foundation and has never served as its treasurer or CFO.[5] Beyond these false allegations, when Riehl asked ChatGPT for a copy of the complaint, the chatbot produced the full text of a fictional lawsuit, including a fake case number.[6] Riehl informed Walters of the false allegations produced by ChatGPT, and Walters filed a lawsuit against OpenAI, the developer of ChatGPT, claiming that the information produced by the chatbot was “libelous” and harmful to his reputation.[7]
This lawsuit raises the question: should artificial intelligence platforms be held liable for defamation based on their programs’ output? This legal issue must be considered in the context of current U.S. defamation rules, particularly 47 U.S.C. § 230, which distinguishes between online platforms that publish content produced by other sources and those that produce original content.[8] Under the statute, platforms in the second group can be held liable for any defamatory content they produce, while platforms in the first group are granted immunity from tort liability.[9]
In determining whether artificial intelligence programs can be held liable for defamatory content they produce, it must first be determined whether the content produced by these programs is more analogous to publishing content produced by other sources or to producing original content. Under current U.S. defamation rules, a number of online platforms falling into the first group, including a host of search engines and social media websites, have been granted immunity under 47 U.S.C. § 230,[10] which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[11] However, unlike search engines and social media websites that simply publish information originating from another source, output by artificial intelligence platforms is created by the platform itself.[12] When responding to a prompt, artificial intelligence generators produce original answers by piecing together words “based on word frequency connections drawn from sources in the training data.”[13] Even though the output produced by these generators draws on information found in existing sources, artificial intelligence platforms are responsible for the way in which these words are assembled in their output, and they can produce inaccurate, defamatory content, as was the case in the present lawsuit.[14] Therefore, it is unlikely that content produced by ChatGPT and other artificial intelligence platforms will be shielded from defamation liability under 47 U.S.C. § 230.[15]
[1] Michelle Cheng, OpenAI Has Been Sued for Libel over Hallucinations by ChatGPT, Quartz (June 7, 2023), https://qz.com/libel-lawsuit-openai-artificial-intelligence-1850516327.
[2] Complaint at 2, Walters v. OpenAI, L.L.C., No. 1:23-cv-03122 (N.D. Ga. July 14, 2023).
[3] Miles Klee, ChatGPT Is Making Up Lies – Now It’s Being Sued for Defamation, Rolling Stone (June 9, 2023), https://www.rollingstone.com/culture/culture-features/chatgpt-defamation-lawsuit-openai-1234766693/.
[4] Complaint at 3, Walters v. OpenAI, L.L.C., No. 1:23-cv-03122 (N.D. Ga. July 14, 2023).
[5] Id.
[6] Klee, supra note 3.
[7] Complaint at 1, 4, Walters v. OpenAI, L.L.C., No. 1:23-cv-03122 (N.D. Ga. July 14, 2023).
[8] Cheng, supra note 1.
[9] Eugene Volokh, Large Libel Models? Liability for AI Output, 3 J. Free Speech L. 489, 495 (2023).
[10] See Klayman v. Zuckerberg, 753 F. 3d 1354, 1360 (D.C. Cir. 2014); Doe v. MySpace, Inc., 474 F. Supp. 2d 843, 847 (W.D. Tex. 2007); Coffee v. Google, LLC, No. 20CV03901, 2021 U.S. Dist. LEXIS 26750, at *13 (N.D. Cal. Feb. 10, 2021).
[11] 47 U.S.C. § 230(c)(1).
[12] Volokh, supra note 9, at 496.
[13] Id.
[14] Id. at 495.
[15] Id. at 494.
Image Source: https://www.geeksforgeeks.org/what-is-artificial-intelligence/