United States Legal Perspective on AI Defamation

The digital era has ushered in transformative advancements in technology, especially in the realm of artificial intelligence (AI). While AI has undeniably provided numerous benefits to society, it has also sparked legal debates regarding accountability and liability, particularly in the context of defamation. Defamation, a long-standing concern, has posed newer and more significant legal, social, and ethical challenges for legislators in recent years. This article explores the complexities of defamation in the digital era, focusing on the question of liability for AI-generated defamation under the United States’ current legal framework.

 

1.    The Identification of Defamation: The Practice of ‘Red Teaming’ Artificial Intelligence

 

In order to grasp the nuances of defamation in the digital era, it is imperative to first identify the mechanisms that can be used to compel AI models to generate harmful speech. When trained on extensive datasets, AI models become susceptible to learning biases against demographic groups and other specific communities[1]. These biases can then manifest in the models’ decision-making processes and outputs, with detrimental consequences for specific individuals or vast communities. Furthermore, AI systems have the potential to disseminate such disinformation at an unprecedented scale, posing severe challenges to the assessment of the veracity of information in the current digital age[2]. Notably, AI systems have previously been found to be responsive to harmful instructions, indicating the capacity of artificial intelligence to be manipulated by humans into executing harmful tasks at a substantial scale[3]. It is therefore possible to affirm that the application of defamation standards to current AI systems is becoming a pressing concern for legislators[4].

Defamation, at its core, involves the dissemination of false and injurious information regarding an individual. Recent legal cases in the United States, such as Walters v. OpenAI, L.L.C. and Battle v. Microsoft Corporation, serve as exemplars of the evolving landscape of defamation in the digital age[5]. Specifically, these cases highlight the need for a more nuanced legal framework capable of effectively addressing the intricacies of AI-generated defamation.

 

2.    The Application of the Current Legal Framework

 

Within the United States, the current legal framework governing liability for AI-generated defamation hinges on two pivotal factors: Section 230[6] of the Communications Decency Act and the nature of the legal rule violated by the AI-generated speech in question.

 

a)     Section 230 of the Communications Decency Act (1996)

Section 230(c)(1) of the Communications Decency Act, enacted in 1996, establishes that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. This provision extends immunity to platforms such as social media and search engines, safeguarding them against legal penalties arising from harmful content generated by their users[7]. Nonetheless, the application of such immunity to generative AI systems remains a subject of debate. AI models fundamentally differ from traditional platforms in that they do not merely extract data from existing sources but generate text based on linguistic patterns learned during their training[8]. This distinction presents new challenges in extending Section 230 immunity to generative AI. Since AI models autonomously generate content in response to specific prompts, questions arise regarding their eligibility for immunity.

The applicability of Section 230 to AI-generated content hinges on several factors. First, some baseline models exhibit a propensity to generate outputs grounded in their training data, effectively placing them on a spectrum between a retrieval search engine and a creative engine[9]. Second, AI models operating in a few-shot prompt mode, where users provide examples in order to dictate behavior, can imitate content provided by the users, blurring the line between a platform and a content generator. The extent of copying from user-provided text further complicates the immunity analysis[10].

In scenarios where Section 230 does not extend its protection to AI-generated defamation, questions regarding liability become paramount. Deployers of generative AI may face serious liability for hate speech and false defamatory content generated by these AI systems. The determination of whether the artificial intelligence model or its user bears responsibility is contingent upon the nature of the speech in question[11].

 

b)     Ethical Concerns: The Notion of Mens Rea

The scenarios outlined earlier give rise to ethical questions concerning the attribution of liability in the event that artificial intelligence fabricates information, leading to defamation. Indeed, the degree of liability will be contingent on the specific tort or crime under consideration.

In cases of defamation, any entity that publishes false statements about an individual or a group to a third party assumes responsibility for defamation, provided the requisite intent is established[12]. The notion of publication within defamation law carries a distinct meaning, encompassing any communication of the false statement to a third party other than the plaintiff. While AI, as a standalone entity, lacks legal standing and the capacity to be sued, the liable party in such cases is likely to be the company or individuals responsible for deploying the AI system[13].

To reconcile First Amendment concerns with defamation law, courts have required a level of mental awareness in order to hold individuals liable for disseminating false information[14]. For public figures, the actual malice standard demands knowledge of falsehood or reckless disregard for the truth[15]. In cases involving private citizens, a negligence standard applies, meaning that individuals are liable only if they knew or should have known of the falsity of the statement[16]. Notably, AI lacks intent or a state of mind, rendering the search for one a futile endeavor. This poses a challenge in determining the liability of AI creators, which may instead turn on whether their design choices constituted negligence or recklessness in causing potential harm.

 

3.    Conclusion

 

Defamation in the digital era presents a multifaceted challenge in discerning liability for AI-generated wrongdoing. To address these complex issues, it is imperative to consider potential safety mechanisms aimed at curbing harmful generated speech. One viable design decision involves compelling AI models to employ only verbatim quotes from existing content, adopting an “extractive” approach. This approach aligns more closely with the features of previously litigated search engines and is therefore more likely to receive Section 230 immunity[17].

However, numerous questions remain unanswered, including the liability of AI for defaming private individuals and the standards for establishing a state of mind in liability cases. As technology continues to advance, legal frameworks and ethical considerations must adapt in tandem to ensure accountability and protect individuals from the potential harms of AI-generated defamation. The evolving landscape of AI-generated wrongdoing necessitates ongoing examination and adaptation of legal and ethical standards to preserve justice in the digital age.

[1] E. Bender et al., On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021.

[2] J. Goldstein et al., Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations, 2023.

[3] M. Lemley – P. Henderson – T. Hashimoto, Where’s the Liability in Harmful AI Speech?, 2023.

[4] M. Ambrose – B. Ambrose, When Robots Lie: A Comparison of Auto-Defamation Law, in 2014 IEEE International Workshop on Advanced Robotics and its Social Impacts, 2014, 56–61.

[5] Walters v. OpenAI, L.L.C., 1:23-cv-03122 (N.D. Ga.); Battle v. Microsoft Corporation, 1:2023cv01822.

[6] 47 U.S.C. § 230(c)(1).

[7] Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023).

[8] Lemley, Where’s the Liability in Harmful AI Speech?, cit.

[9] J. Miers, Yes, Section 230 Should Protect ChatGPT And Other Generative AI Tools, 2023.

[10] Ibid.

[11] E. Volokh, The Speech Integral to Criminal Conduct Exception, in Cornell Law Review, 2016.

[12] Lemley, Where’s the Liability in Harmful AI Speech?, cit.

[13] Ibid.

[14] Ibid.

[15] New York Times Co. v. Sullivan, 376 U.S. 254, 280 (1964).

[16] Lemley, Where’s the Liability in Harmful AI Speech?, cit.

[17] O’Kroley v. Fastcase Inc., 831 F.3d 352, 355–56 (6th Cir. 2016).
