Trump’s Executive Order: Another Tile in the Mosaic of Governing Online Speech


Last week’s Executive Order on Preventing Online Censorship, issued by President Trump, comes at a tense time in global politics and in the struggle to assert power over social media platforms. It was triggered by Twitter’s move to place fact-checking labels on the President’s tweets linking mail-in ballots to election fraud. Even before the adoption of the order, the President had signalled his intention to move the matter from the executive branch to Congress. Republicans appear to be in favour of regulatory intervention in social media matters, particularly after repeated reports about the silencing of conservative voices through biased online moderation. This political threat, aimed at restricting the legal liability protection that allowed platforms to become what they are today, is a powerful one because it ultimately touches on the business model of the ‘eyeball’ economy.

Most media commentators reacting to the Order have so far focused on the attempt to limit the scope of the immunity from liability introduced by Section 230(c) of the Communications Decency Act in 1996, pointing out that federal law remains unchanged and that the Order is no more than an act of intimidation and politicization of content moderation. In our opinion, there is much more to be said when the order is understood as a tile in the global mosaic of online speech governance. As a first concrete step towards shaking the foundations of the current governance system for free speech online, the Order opens the door to profound legal and political restructuring affecting the Internet and the status of intermediaries in and outside the US.

A Constitutional Paradox

The order starts from a recognition of the powerful position of a handful of American intermediaries in controlling what the (global) public gets access to. Twitter, Facebook, Instagram, and YouTube are directly named in the Order as platforms that “wield immense, if not unprecedented, power to shape the interpretation of public events; to censor, delete, or disappear information; and to control what people see or do not see”. The “selective censorship” performed by these tech giants is one of the stated justifications for issuing the order, alongside their double standard of restricting speech domestically while “profiting from and promoting the aggression and disinformation spread by foreign governments”. Defining the relationship with the Silicon Valley tech giants in these terms is a clear change of tone that might find wide support, but the means used call for further scrutiny.

As an expression of authority, the executive order has an alarming scope, but it is not grounded in any existing power of the US President over freedom of expression, precisely because it concerns dissent. Already in 1919, in Abrams v. United States, Justice Holmes showed us the path taken by the US constitutional protection of the right to free speech, summarized in the liberal metaphor of the “free marketplace of ideas”. The current debates shed light on this core tenet: in an open and transparent public sphere, the truth would emerge from the competition of ideas circulating freely. This principle also applies to social media, which the US Supreme Court defined as the new marketplace of ideas as early as 1997 in Reno v. ACLU.

Trump’s executive order is a constitutional paradox. It is incongruous not only in its impact on free speech as a fundamental right, but also with regard to the established separation of powers between the executive and Congress, as the former has no power to amend the work of the latter. If the President wanted to change the protections afforded by the Communications Decency Act, issuing an executive order would not be the right way to intervene from a constitutional point of view.

Such an executive order is also incoherent when looking at the legislative inertia of the US Congress, the result of what Michael Birnhack and Niva Elkin-Koren would call an “invisible handshake” discouraging regulation. The direction indicated by President Trump also goes against the recent orientation of the US Supreme Court towards social media. Leaving aside lower-court case law such as Lewis v. YouTube or PragerU v. YouTube, the US Supreme Court clarified in Packingham v. North Carolina that social media play a critical role in the digital public sphere as the vast democratic forums of the Internet. The Executive Order cites this case, together with Pruneyard Shopping Center v. Robins, to argue that, although social media platforms are private actors, they provide a public forum online. Indeed, the Supreme Court labelled social media the ‘modern public square’, but it did so in order to declare unconstitutional a state law introducing a prior restraint on free speech. This case alone should have been enough to bar the prior restraint on free speech that this executive order introduces. Moreover, in a decision from last year, Manhattan Community Access Corp. v. Halleck, the Supreme Court closed the door to a potential extension of the state action doctrine when it decided that private actors, namely cable TV companies operating public access channels, do not act on behalf of a public actor (in that case, the City of New York) and are thus not bound to protect free speech rights.

Within the US constitutional framework, the First Amendment’s prohibition on abridging the freedom of speech or of the press should provide a safe anchor against disproportionate attacks on liberty and democracy. From an American perspective, such an executive order is, if anything, less threatening than an act of Congress would be, since it cannot amend federal law on its own. President Trump’s order is a political reaction without constitutional support, and the Center for Democracy & Technology has already filed a lawsuit against it. Yet, as an early warning that online intermediaries might soon face strong regulation, the executive order is significant not only domestically but also internationally.

The Executive Order and Social Media: Public and Private Governance

Reining in social media appeals to many politicians around the world, just as it does to President Trump, at times for purely instrumental purposes. As shown by Zeynep Tufekci, social media and President Trump share an interest in maintaining a certain degree of polarisation and polluted information. Platforms themselves apply their terms of service inconsistently, exploit their monopolistic positions, and use content moderation practices that are far from transparent or accountable. By choosing an inadequate instrument to take issue with the quasi-public powers vested in the hands of profit-driven platforms, the Trump administration adds ambiguity, rather than a clear direction of action, to the mosaic of online governance. The role of platforms as “arbiters of truth” and the serious challenges raised by online content moderation at the global scale are real, wicked problems that are here to stay long after the hype around the executive order is over.

Despite their position as privileged digital squares for meeting and sharing thoughts, ideas, and opinions, social media platforms continue to be treated as “neutral infrastructure” and to self-regulate on the basis of their own community guidelines. When they take decisions at the global level on a public good like speech, they produce externalities at the intersection of private business purposes and the public interest. In this case we do not bear the financial costs of traffic jams or the erosion of roads; instead, we face the long-term consequences of restricting free speech, a cornerstone of democracy. To maintain a pleasant environment that attracts users’ content and data, which in turn feed advertising revenues, online platforms need to moderate content even if this activity turns them into arbiters of speech. Indeed, as Tarleton Gillespie has underlined, content moderation is a constitutive function of social media platforms’ activities.

The primary challenges of content moderation have been extensively documented by researchers such as Kate Klonick, Daphne Keller and Evelyn Douek, and have more recently also been recognized by European policy-makers. On the one hand, when social media autonomously decide to remove certain user-generated content, their decision can be equated to private censorship and thus understood as a negative externality. On the other hand, content removal also produces positive effects, since social media internalise costs that users would otherwise bear to remedy the harm caused by objectionable content. From the victim’s perspective, platforms’ policing of content allows users to benefit from this activity rather than having to litigate the violation of their rights themselves. In the case of copyright, for instance, a rightsholder benefits from the activity of content moderation. Even more importantly, content moderation can be considered a necessary cost of remedying the negative externalities of users’ speech: it ensures that users feel safe to exercise their right to free speech online and can thus foster democratic values.

The executive order issued by President Trump encapsulates the dilemma between public authority and private governance, but raises more questions than it answers. Given that the negative externalities generated by social media are global, is it better to have a public actor define speech limitations within its jurisdiction, or a private corporation apply its own standards when moderating the content of billions of users? As Elettra Bietti underlined, this dilemma cannot be solved by focusing only on the public or the private sector; referring to Jack Balkin’s free speech triangle, she observed that “the discourse around online speech forms an insoluble circle that needs to be broken”. Moving beyond the dichotomy of public/private functions, the main issue is how to address powers in the information society on a global scale, no matter whether they stem from public or private determinations.

The Global Mosaic of Online Speech Governance

The executive order of 28 May 2020 represents a turning point in the US approach to social media over the last twenty-five years. Since the adoption of the Communications Decency Act in 1996, the liberal path of granting intermediaries immunity for the user-generated content they host has been a global model for constitutional democracies, shaping the approach to social media regulation all over the world. The European Union, for example, has exempted platforms from liability, provided they are unaware of the illicit content they host, since as early as 2000. Over the years, the broad protection granted by the First Amendment, mixed with statutory immunity and contract law, has blocked virtually any attempt to regulate social media or to make them liable for their content moderation activities.

While we are discussing the future of free speech from a constitutional democracy perspective that stems from the US model, it would be a mistake to neglect what is happening in other parts of the world, where public authorities have taken a different stance on free speech protections. To combat the unprecedented spread of online disinformation since 2016, both authoritarian and democratic regimes have sought to use legislation that criminalises online falsehoods and empowers platforms to police content on a broader scale, and, in some cases, have gone as far as relying on Internet shutdowns. Legislation of this kind, passed so far in more than a dozen countries and usually built on vague definitions of “falsehood” and “public interest”, has given social media platforms direct responsibilities for monitoring and taking action on user content.

This is a stark reminder that, in practice, the 1996 Communications Decency Act’s protection against legal obligations to monitor third-party content did not stand the test of time. Intermediaries generally take on such responsibilities by contracting content moderators (in limited numbers) and by (extensively) deploying artificial intelligence (AI) tools. Facebook reported that 88.8% of all its hate speech and disinformation removals between October 2019 and March 2020 were proactively detected by AI, compared to only 11.2% of the content being flagged by users. As automated content moderation becomes the dominant practice, unaccountable AI takedowns add to the difficulty of treating platforms as mere content hosts when they perform algorithmic edits.

The growing use of AI in moderating content on a global scale leaves us wondering how the role of public authorities might transform in the coming years. With little oversight over the algorithmic processes developed by each platform, how can the state ensure that the enforcement of fundamental digital rights is not left to a hybrid standard of protection defined by machines without judicial scrutiny? While the US has always looked at this topic from an internal perspective, the European Union has shown more sensitivity to the extension of constitutional values beyond its borders, as we have seen in the Google Spain and Schrems cases. More recently, the Union has slightly changed its approach in Google v. CNIL and Glawischnig-Piesczek v. Facebook, leaving the decision to extend removal orders worldwide to the discretion of Member States.

What Lies Ahead?

Could President Trump’s executive order carve a hole in the wall of social media immunity on a global scale? As it stands today, this is improbable. The order should be seen as a symbolic step taken by the White House for domestic signalling ahead of the 2020 elections. Across the Atlantic, the European Commission’s current work to review the liability framework for online platforms as part of the Digital Services Act is more likely to yield results. The European approach might subsequently become influential at the global level, as we have already seen in the field of data protection with the judicial activism of the Court of Justice of the European Union and the adoption of the General Data Protection Regulation. In other words, as Anu Bradford underlined, the Brussels effect could play an increasing role on a global scale.

This executive order from the White House is yet another opportunity to think about the future of the right to free speech online at a time when the boundaries between public and private governance are increasingly blurred. Acts of politicization and symbolic threats add complexity to the mosaic of governing online speech, on which our digital society is constantly being built. Going back to the key question of how to deal with powers in the information society, striking the right balance between public and private responsibilities in online content moderation will take a long time.

Before clarifying the treatment of content, the focus should be on procedure and safeguards. Twitter can continue to moderate content while maintaining its immunity from suit, but it has a responsibility to explain how this is done and to give users access to remedies. It is time to move away from simply defining what is legal or illegal online and to strive for real transparency. Decisions on speech can seriously undermine democracy, and we need safeguards against that from both public and private actors. In a democratic digital society, we should aspire to know more about what is happening behind the scenes. Decisions on the right to free speech cannot be left in the hands of unaccountable powers, which opens the door to considering tech giants as public utilities.


