Freedom of expression online – not dead yet, but getting there


Attacks in Paris and Brussels in the past year have put the Western world on high alert and have given rise to a conversation about the role of speech – especially the content shared on social media.

“The recent terror attacks have reminded us of the urgent need to address illegal online hate speech,” said Vera Jourova, EU commissioner for justice, consumers and gender equality. “Social media is unfortunately one of the tools that terrorist groups use to radicalize young people, and racists use to spread violence and hatred.”

While the motivation behind these statements might seem well-intentioned to some, the manner in which solutions to these issues are being implemented is deeply problematic, to say the least. The whole ongoing process can be summarized in four simple steps for silencing freedom of expression online:

 

1. Use the press wisely – in a shallow manner (i.e. “We need to do something about something bad”)

We all agree that terrorism is bad. But the EU institutions seem to believe that the key task is to pile enough PR problems onto Internet companies like Google, Facebook and Twitter so that terrorism becomes their problem and they feel obliged to do, well, “something”.

France has previously blamed Facebook and Twitter for failing to curb terrorist activity on their platforms, and Germany has convinced Facebook, Twitter and Google to agree to a 24-hour removal rule for hate speech. The principle that internet companies can and should police online speech thereby becomes more important than the effectiveness of that policing or any counterproductive effects it may have. As the media rarely digs into the core of such issues, there is little or no danger that anyone will undertake a detailed analysis of whether the measure actually helps or harms society.

 

2. Keep the rules on intermediary liability as vague as possible

When the E-Commerce Directive was drafted around the year 2000, there were few blogs and little social media. Online platforms have changed considerably since then, making it difficult to apply the old rules to this new world. Where do Facebook or Twitter fall under the Directive? Are they mere conduits, caching services, hosting services, or all of these at once? At what moment does an intermediary obtain actual knowledge of the illegality of certain content – when it receives a notification, or only after a court order to remove that content? Countries across Europe have answered these questions differently in their national legislation. As long as companies are uncertain about their legal liability, they will play it safe, remove any content that might conceivably lead to liability, and always prioritise measures that protect their own profits and market share. Moreover, their decision-making contains few or no procedural guarantees (e.g. possibilities for recourse when ‘lawful’ content is removed) for those whose right to freedom of expression is interfered with. This creates a situation in which the intermediary has significant power but limited or no responsibility, and can therefore be pressured by governments, through various policy changes, into becoming a “gatekeeper” of the Internet.

 

3. Start the process “of doing something” but exclude civil society actors from participating

Launched at the end of 2015, the “EU Internet Forum” was meant to counter vaguely defined “terrorist activity and hate speech online”. The discussions were convened by the European Commission and brought together almost exclusively US-based internet companies and representatives of EU Member States.

Civil society organizations maintained that none of them were invited to participate in the discussions on terrorism, although several were allowed to take part in talks on online hate speech. They were completely excluded from the EC’s talks with technology companies, which led to the Code of Conduct released in June 2016. As a result, European Digital Rights (EDRi) and Access Now, two of the EU’s most prominent advocates for online rights, said they would not take part in future discussions held under the banner of the Commission’s “EU Internet Forum”.[1]

Although Access Now was not part of the discussions, it was asked to endorse the process, said policy analyst Estelle Masse. The Code calls for civil society organizations to play a role in flagging content that incites hate or violence, but provides little in the way of detail.

When governmental intervention such as this initiates negotiations that will inevitably have an impact on human rights online, the civil society voices that uphold individuals’ right to freedom of expression should be present at the table. Their exclusion makes it clear that this is an act of government policy, not a matter of voluntary cooperation.

 

4. Create a legally non-binding document that only adds to the already impossibly unclear mishmash of law and terms of service

Here is a summary of the “Code of Conduct” as announced on May 31 in a statement by the European Commission:

“By signing this code of conduct, the IT companies commit to continuing their efforts to tackle illegal hate speech online. This will include the continued development of internal procedures and staff training to guarantee that they review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary. To be considered valid in this respect, a notification should not be insufficiently precise or inadequately substantiated. The IT Companies are to review such notifications against their rules and community guidelines and where necessary national laws transposing the Framework Decision 2008/913/JHA.”

In practice, this means that illegal content will be banned through terms of service. A very similar project called “Clean IT” (also funded by the European Commission) suggested that terms of service “should not be very detailed”[2], allowing companies to take ad hoc and arbitrary policing measures – in other words, maximising the potential for unilateral action on the part of the service provider and, indirectly, the EU institutions.

Connected to the previous point, what are the criteria for a valid notice under this regime? The Code describes a valid notice as “not … insufficiently precise or inadequately substantiated.” How will such notices compare to notices issued under the E-Commerce Directive? What information are these notices required to include?

Based on the above, it is clear that private companies are expected to take “the lead” in fighting illegal hate speech online. Twitter’s head of public policy for Europe, Karen White, said: “We remain committed to letting the tweets flow. However, there is a clear distinction between freedom of expression and conduct that incites violence and hate.”

Is there one, really? And can the distinction be drawn by private companies in a time frame of less than 24 hours?

The Code of Conduct specifically refers to the Framework Decision on Combating Certain Forms and Expressions of Racism and Xenophobia by Means of Criminal Law (Framework Decision) as the legal basis for defining illegal hate speech under the Code. But as Article 19 has pointed out, “the Framework Decision fails to provide a clear benchmark in relation to the definition of hate speech”.

In Delfi AS v. Estonia and Index.hu Zrt v. Hungary, the Court reiterated its endorsement of notice-and-take-down, with private companies deciding on the lawfulness of content. But this approach is already isolating the ECtHR, since in many jurisdictions (Spain, Italy, the Netherlands, Finland, etc.) intermediaries become liable for “unlawful” content only when they fail to react following notice from a judge or another independent body.[3]

The Code does mention that firms will need to “provide regular training to their staff on current societal developments and to exchange views on the potential for further improvement.” Proponents of the initiative argue that, in the aftermath of the recent terrorist attacks in Paris and Brussels, a crackdown on “hate speech” is necessary to counter jihadist propaganda online. But do we have any clear guarantees of the quality of this training, or that the staff won’t have ideological biases? Intermediary service providers are simply less well placed than courts to assess the lawfulness of comments on their domains, especially under a regulatory framework as unclear as the present one.

Qualifying speech as hate speech is a very difficult and delicate exercise, not only for domestic courts but also for the European Court of Human Rights. This is illustrated by the Strasbourg Court’s own case-law: various cases concerning whether certain speech could or should be qualified as hate speech (e.g. I.A. v. Turkey; Lindon, Otchakovsky-Laurens and July v. France; Féret v. Belgium; and Perinçek v. Switzerland) resulted in divided votes (see also Vejdeland and Others v. Sweden, especially the discussion in the concurring opinions).[4]

Nor should we forget that ISIS uses social media to perform its recruitment-oriented “theater,” presenting a carefully packaged image of itself as the fulfillment of a kind of ultimate jihadi fantasy. When dispersed via interactive social media, the imagery and ideology contained in this “theater” implicitly “normalizes” extreme attitudes toward concepts such as “jihad” and “martyrdom” by allowing audience members to feel included in virtual groups of like-minded individuals.[5] Social media thus guides the psychological changes that underlie radicalization – a process that takes time and is not always as apparent and manifestly illegal as the proponents of content removal by private companies would like to think.

 

5. Social media platforms – new public spaces?

The slim document that is the “Code of Conduct” isn’t legally binding for the internet companies; instead, it establishes “public commitments” for them.

So where do Twitter and Facebook fall? Are they merely transmitting messages and thus free from liability? Or do they bear added responsibility for what their users say because they explicitly reserve the right to exercise control over that speech? Or are they a new thing altogether – and if so, shouldn’t we rethink our new de facto public spaces before we completely abandon basic human rights principles?

Our right to freedom of expression is laid down in law by the EU Treaties. To ensure democracy and accountability, this fundamental right may not be restricted unless the restriction is provided for by law, is necessary, and genuinely meets an objective of general interest. This right is now being actively undermined by the European Commission.

The current method of establishing regulation through “pressure”, outside an accountable democratic framework, exploits the unclear liability rules for companies.

Is this really how we are going to create credible regulation on fighting terrorism?

If, after every terrorist attack, legislators enact new laws and “codes” in the hope of creating a feeling of greater security and safety in the population, without actually re-thinking the efficiency or democratic legitimacy of such processes, we will end up with fewer freedoms and rights in exchange for no real solution.

If these companies cave to the EU, then what’s to stop Russia from demanding similar policy changes with regard to pro-LGBT speech? Or Turkey, where anti-government speech is deemed “un-Islamic”?

Maybe the “Code of Conduct” is just good PR for these companies; maybe it’s a step toward sliding into an undemocratic world of corporate censorship.

We’ll see.



[2] EDRi, “Human Rights and Privatised Enforcement”, 2014.

[3] This approach is supported by the Manila Principles on Intermediary Liability, 2015.


About Author

Tihana Krajnović is currently in her final year at the Faculty of Law, University of Zagreb. Her past extracurricular activities include participating in the Price Media Law Moot Court Competition in Oxford, working as a national researcher and academic coordinator in two ELSA International and Council of Europe Legal Research Groups on online hate speech and social rights, and volunteering at the University of Zagreb’s Legal Clinic (Anti-Discrimination and Protection of National Minorities Rights Department). Her research interests include data ethics, big data, AI, machine learning, algorithms, robotics, privacy, data protection and technology law, and European, international and human rights law.
