Unlike states, social media companies – indeed, our new governors – have only voluntary commitments to International Human Rights Law (IHRL). However, once IHRL becomes the common language of content moderation, it is not inconceivable that binding IHRL standards can also be developed by private bodies, Meta’s nascent standard on risk of incitement being a case in point. What is the substance of the new standard and why is this important?
This piece traces the evolution of restrictions on inciting speech, starting with the US constitutional law case of Brandenburg v. Ohio, through IHRL, and on to the decisions of the Meta Oversight Board in two more recent controversies – the Suspension of Former President Trump’s Facebook account and the Depiction of Zwarte Piet. The analysis shows that there is a divergence in how incitement is treated in the offline and the online world. So far, the online restrictions on inciting speech have been broader than the offline ones, limiting harmful speech and defending the rights of others. However, it remains to be seen whether the Meta Oversight Board will continue to enforce a harm-preventive standard in the new political context prompted by the Trump administration. The recent debate prompted by Meta’s changes to content moderation practices has focused on the replacement of fact checking with community notes and on enforcement changes. However, what might also be at stake is a novel substantive standard on inciting speech with potential purchase for the offline world.
1. The Clear and Present Danger Limitation and Brandenburg v. Ohio
Although the threshold for limiting it is very high, freedom of speech in the American constitutional tradition is not an absolute right. One of the most prominent limitations, enshrined in the Brandenburg v. Ohio test, concerns inciting speech. The standard of incitement set by the US Supreme Court in Brandenburg v. Ohio is the culmination of fifty years of separate opinions. In Schenck v. United States, the US Supreme Court introduced the clear and present danger test for the first time. However, the test was best articulated later in Justice Holmes’s dissent in Abrams v. United States, where the Justice stated that “only the present danger of immediate evil or an intent to bring it about” should allow Congress to restrict free speech. In his concurring opinion in Whitney v. California, Justice Brandeis clarified that “…even advocacy of violation, however reprehensible morally, is not a justification for denying free speech where the advocacy falls short of incitement and there is nothing to indicate that the advocacy would be immediately acted on. The wide difference between advocacy and incitement, between preparation and attempt, between assembling and conspiracy, must all be borne in mind. In order to support a finding of clear and present danger, it must be shown either that immediate serious violence was to be expected or was advocated, or that the past conduct furnished reason to believe that such advocacy was then contemplated.”
The emphasis is on the imminence of lawless action, which leads to a distinction between mere advocacy of such action and direct incitement. Decided in 1969 and still good law, this is what Brandenburg v. Ohio stands for. The Court’s holding is encapsulated in a few sentences: “[T]he constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe (1) advocacy of the use of force or of law violation, (2) except where such advocacy is directed to inciting or producing imminent lawless action, and (3) is likely to incite or produce such action.” The test is highly protective of free speech, generally allowing advocacy and proscribing speech only when the three prongs of intent, imminence and likelihood are met.
Commentators have observed that the high protection the KKK leader’s speech received in Brandenburg and, more generally, the high bar of incitement set out there have arguably not helped protect political dissent but have instead given rise to “reckless speech”.
2. IHRL
By contrast, IHRL assesses speech restrictions based on a tripartite test:

1. Legality – The restriction must be based on a clear and precise law.
2. Legitimate Aim – The measure must protect the rights of others, security, or public order.
3. Necessity & Proportionality – It must be the least restrictive means to achieve the aim.

The right to freedom of expression under IHRL is also explicitly limited in the case of incitement, with Article 20(2) of the International Covenant on Civil and Political Rights (ICCPR) stating that “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law”. Incitement is defined in a broader framework known as the Rabat Plan of Action, which evaluates speech based on six factors: 1) the social and political context of the speech; 2) the status of the speaker; 3) the intent of the speaker (where negligence and recklessness would not suffice); 4) the content and form of the speech; 5) the reach of the speech; and 6) the likelihood of harm, including its imminence.
Brandenburg’s indirect influence can be traced to a UN report specifying that under international law speech cannot be restricted unless it can be demonstrated that (a) the expression is intended to incite imminent violence; (b) it is likely to incite such violence; and (c) there is a direct and immediate connection between the expression and the likelihood or occurrence of such violence. However, although also highly protective of freedom of expression, the international law framing, compared to US constitutional law, purports to balance freedom of expression against other considerations, thereby allowing restrictions in a wider pool of cases.
3. Meta Oversight Board’s new online standard of risk of incitement
Both the American constitutional law and the IHRL structures for restricting speech were elaborated for the offline world. How do these standards translate to online speech? The Meta Oversight Board, a quasi-judicial, quasi-advisory body that sits at the top of the content moderation hierarchy at Meta, provides an example of how online speech standards are being developed.
The Meta Oversight Board upheld Facebook’s decision of 7 January 2021 to restrict then-President Donald Trump’s access to posting content on his Facebook Page and Instagram account. The Board applied the Rabat Plan factors to its risk assessment of incitement, closely following IHRL. However, the case failed to clarify the Board’s final stance on incitement, since the suspension of Trump’s account was based on the Dangerous Individuals and Organizations Community Standard rather than the Violence and Incitement Standard. A minority of the Board would have found the threshold of incitement satisfied, “since read in context, the posts stating the election was being “stolen from us” and “so unceremoniously viciously stripped,” coupled with praise of the rioters, qualifies as “calls for actions,” “advocating for violence” and “misinformation and unverifiable rumors that contribute[d] to the risk of imminent violence or physical harm” prohibited by [Meta’s] Violence and Incitement Community Standard.” In particular, the minority suggested assessing incitement in the light of Mr. Trump’s use of Facebook’s platforms also prior to the November 2020 presidential election. Arguably, the views of the minority would have been uncontroversial in the offline world under both the First Amendment and IHRL, and President Trump’s speech would have been found inciting.
Be that as it may, the Board already signalled the advent of a more relaxed approach to incitement in discussing the necessity and proportionality of the suspension. It stated that instead of outright speech bans, Facebook might look into developing other mechanisms that prevent the amplification of speech which “poses risks of imminent violence, discrimination, or other lawless action”. Therefore, speech that does not reach the threshold of incitement but presents a risk of incitement might also be restricted. Moreover, the Board required that Facebook review the time-bound suspension of Trump’s account and extend it if it were to find “a serious risk of inciting imminent violence, discrimination or other lawless action”.
In another case, Depiction of Zwarte Piet, the Board had the opportunity to clarify that a risk of incitement can justify taking posts down. In that decision, the Board upheld Facebook’s decision to take down speech which violated the protected characteristic of race and ethnicity. When discussing the necessity and proportionality of the ban on blackface, the Board was concerned with “the accumulation of degrading caricatures of Black people on Facebook creat[ing] an environment where acts of violence are more likely to be tolerated and reproduce discrimination in a society.” In its necessity and proportionality analysis, moreover, the Board quoted the documented experience of discrimination and violence against Black people in the Netherlands connected to the practice of Zwarte Piet. It “noted reported episodes of intimidation and violence against people peacefully protesting Zwarte Piet”. All this to say that although for the Board the depiction of Zwarte Piet did not rise to the level of incitement offline, it did contain a risk of incitement. In Depiction of Zwarte Piet, moreover, the Board relied on a broader notion of harm compared to the one espoused in the Rabat Plan of Action and, unlike in the Rabat Plan, did not require intent for the content to be removed. For example, it stated that: “[a] majority found that allowing such posts to accumulate on Facebook would help create a discriminatory environment for Black people that would be degrading and harassing.” With that, the Board pointed to the long-term societal consequences of the speech, not its real or potential possibility for imminent harm. This interpretation differs from the definition of direct incitement that leads to imminent harm, as described in paragraph 21 of the Rabat Plan of Action.
Similarly, in a string of cases on hate speech (Armenians in Azerbaijan, Alleged Crimes in Raya Kobo, Knin Cartoon, South Africa slurs, Holocaust denial), the Board upheld the removal of speech even while stating that the offline standard of incitement under IHRL might not be fulfilled.
4. Conclusion
The former UN Special Rapporteur on freedom of expression, Professor David Kaye, has poignantly suggested that, provided they rely on strict criteria that protect free speech, “…[s]tates [too] may restrict advocacy of hatred that does not constitute incitement to discrimination, hostility or violence.” Indeed, adopting a single standard for the offline and the online environment has been favored with the advent of the German Network Enforcement Act. Since what is illegal offline should be illegal also online, why not make what is illegal online also illegal offline?
[1] Dr Bilyana Petkova is the Principal Investigator for a project on freedom of speech at the Meta Oversight Board based at the University of National and World Economy in Sofia, Bulgaria. She is also an Affiliate Scholar at the Yale Information Society Project.