ISPs and harmful content: can technology redeem itself?



In recent years we have witnessed an emerging tendency of European institutions, both legislative and judicial, to place growing liabilities on Internet Service Providers (ISPs) for illegal or harmful material hosted on their platforms.

The Digital Single Market (DSM) directive of 2019, and in particular its Article 17, imposes a duty on ISPs hosting large amounts of works to “obtain an authorisation from the rightholders referred to in Article 3(1) and (2) of Directive 2001/29/EC (InfoSoc), for instance by concluding a licensing agreement, in order to communicate to the public”. Consequently, where no such agreement is reached, the ISPs “shall be liable for unauthorised acts of communication to the public” unless they demonstrate that they have made “best efforts to obtain an authorisation” as well as “best efforts to ensure the unavailability of specific works” indicated by the rightholders, and in any case have “acted expeditiously, upon receiving a sufficiently substantiated notice from the rightholders, to disable access to, or to remove from their websites, the notified works.”

This directive, and Article 17 in particular, have generated strong debate and harsh criticism. Many scholars have emphasized that this new legal regime would undermine the framework we have relied on so far, without providing the right tools and sufficient legal certainty to build a new one. ISPs, in fact – lacking an agreement with rightholders and seeking to avoid legal liability – would start monitoring and filtering users’ content. That situation, however, would be incoherent with the safe harbour doctrine and in clear contrast with the e-Commerce Directive of 2000 and its well-known prohibition of general monitoring obligations[1].

Its flaws aside, the DSM directive reveals a cautious conceptual shift: an opening towards a modification of the current legal framework for ISPs in the direction of enhanced liability.

In this article, however, the focus will be on harmful content – such as hate speech or fake news – rather than on protected works. Although the two topics are interconnected when we talk about ISPs and user-generated content, other initiatives from European institutions are more pertinent to the matter at hand.

The “EU Code of Conduct on countering illegal hate speech online”, adopted in 2016 and signed by the EU Commission and some of the main ISPs (Microsoft, Facebook and Google among others), pushes for stronger collaboration between institutions and tech giants, proposing new forms of self-regulation and internal surveillance by providers. The text is not binding and serves mostly as programmatic inspiration; nevertheless, it has real value where it identifies providers as the most efficient entities – from a merely technical perspective, of course – for counteracting hate speech online[2].

The Court of Justice of the European Union (CJEU) has also recently spoken on similar matters with a ground-breaking decision: Eva Glawischnig-Piesczek v Facebook Ireland Limited. Eva Glawischnig-Piesczek is an Austrian politician who brought an action before the Court of Vienna to oblige Facebook to remove a public defamatory post about her, and “to cease and desist from publishing and/or disseminating photographs showing the applicant if the accompanying text contained the assertions, verbatim and/or using words having an equivalent meaning as that of the comment referred…”[3]

The question reached the Supreme Court of Austria, which referred it to the CJEU, asking whether the removal of identical and equivalent content at a worldwide level would be consistent with Article 15 of Directive 2000/31 and its prohibition of general monitoring obligations.

The CJEU held that there was no violation of the e-Commerce Directive and obliged Facebook to remove the identical and equivalent content, affirming that this was not a general monitoring obligation but an obligation to remove specific information identified in the injunction and equivalent information related to it. This equivalent information “must not, in any event, be such as to require the host provider concerned to carry out an independent assessment of that content […] since the latter has recourse to automated search tools and technologies.”[4]

This case has raised considerable perplexity, mostly because it diverges significantly from previous case law, which was faithful to the safe harbour and careful not to impose any monitoring burden on ISPs[5]. The Court also explicitly mentioned “automated search tools and technologies”, introducing the concrete possibility for ISPs to use software to detect illegal content on their servers.

Main objections to an enhanced-liability system for ISPs

The criticisms of this judgment, and more generally of any new system of growing liabilities for ISPs, are numerous and often well founded, but in my opinion not insurmountable:

  1. Freedom of speech.

This is by far the most common objection to the idea of ISPs filtering content, whether via automated tools or otherwise. To narrow the matter down, we will not discuss the American concept of freedom of speech – the protection granted by the First Amendment and the marketplace of ideas – but will observe the topic from a European perspective.

Within Europe’s borders, freedom of speech has always been seen as a fundamental right: since the Handyside case of 1976, the European Court of Human Rights (ECtHR) has recognized freedom of speech as the cornerstone of every democratic society, protecting even speech that may “offend, shock or disturb”[6]. Nevertheless, the ECtHR’s jurisprudence makes clear that it has always been a question of balancing fundamental rights. The rights to be balanced against freedom of expression in this context are usually the right to privacy, or to the honour and dignity of the subject, and the ISP’s freedom of economic initiative. A limitation of freedom of speech is thus permitted through a balancing process, and the ECtHR has two tools for this assessment: Articles 10 and 17 of the European Convention on Human Rights.

Article 17 was rarely used – so much so that some questioned its value – but in recent years it has resurfaced in the Court’s judgments. The article is a disqualification measure: a deprivation of the protection granted by the Convention for those accused of using the rights it guarantees for liberticidal purposes. The underlying rationale is to protect the kind of society that the Convention has sought to shape and structure: a democratic society capable of protecting itself and, in doing so, of taking a step outside itself, denying its enemies the chance to hide behind the very libertarian guarantees that constitute its essential foundations. It is a remedy to be used cum grano salis, since it could be distorted and used as a pretext for abuse: on the basis of Article 17, for example, the Turkish Constitutional Court legitimized the dissolution of political parties, an act later condemned by the European Court[7].

Article 10, unlike Article 17, is bipartite: its first paragraph states freedom of expression in its various forms, while its second lists the limits and restrictions to which it may legally be subject. Hate speech therefore falls, through Article 10(1), within freedom of expression, but it can be condemned and limited thanks to the exceptions in Article 10(2). In its free speech decisions the Court has developed well-established case law – the result of an extrapolation of the rules contained in Article 10(2) – anchoring the legitimacy of any restriction on freedom of speech to the strict and ascertained passing of a three-step test[8]:

  • There must be a legal provision to justify it.
  • Such a limitation must pursue legitimate aims.
  • It must be considered necessary in a democratic society.

Freedom of speech is therefore indeed a fundamental value that needs to be protected and fostered, but the European Court of Human Rights has shown on several occasions that it can be limited according to this strict test.

1.1 Private censorship and the dictatorship of the algorithm

This argument descends in part from the protection of freedom of speech. It has been brilliantly summarised by Daphne Keller of Stanford University with the dystopian prediction: “Making Google the Censor”[9]. The danger of giving censorship powers to these technology giants exists and requires an effort to avoid it. According to Giovanni Pitruzzella, former president of the Italian Competition Authority, choosing this path could lead to a sort of private censorship, yet it could also be the best choice for the future. He argues that a series of new rules should be implemented, adapted to the peculiar dynamics that freedom of information assumes in the age of the Internet. It would become necessary to enhance and stimulate the content-moderation innovations that have already emerged spontaneously within the main platforms, and to move towards a new regime of freedom of information through new laws, with ex post intervention – on a purely subsidiary basis – by new, independent and highly specialized public institutions in cases of litigation. These ISPs nowadays have the same power, the same wealth and the same population – if we regard users as citizens – as actual nations; we should understand that they can therefore enact the same repression of freedom of speech as a national government. They may accordingly require monitoring and more specific rules on their obligations to protect and safeguard certain fundamental rights.[10]

As for the fear of a dictatorship of the algorithm, some argue that AI will never be able to recognize satire or parody and will stumble over false positives and faulty removals. This is indeed a concrete possibility, but – since we are laying these foundations with the future in mind – we should bear in mind that this software is improving at an incredible pace, and it is reasonable to expect it to reach a level at which even parody is recognized, perhaps with the necessary collaboration of users flagging different types of content and helping the machine learn[11]. Moreover, we already live under a dictatorship of the algorithm: algorithms already select the information they want to show us; they know what we like and what we want to see. We see, for example, only a small part of what happens in our Facebook feed. According to a 2016 survey, 66% of Facebook users get their news from social media: they draw it from echo chambers carved by algorithms especially for them. So why can’t we demand that ISPs at least provide us with an environment free of hate and fake news? Algorithms could, for example, penalise fake-news websites and not show them to us, just as they already do with content we do not like and with plenty of opinions and information that do not suit our vision of the world.[12]

  2. Economic costs

The implementation of this software could create market barriers for ISPs that do not have the economic power of Google or Facebook. It is a valid point, but restrictions could be introduced: the software could be made mandatory only for ISPs with a certain number of users or a certain volume of traffic on their servers, since it is they who give hate speech and fake news the power they hold today.

The technical tools

Hate online is different from hate in the world of atoms. As explained in an interesting 2015 UNESCO report, it has peculiar characteristics, intrinsic to the medium of the Internet, that make it more dangerous, resistant and spreadable than hate offline[13]. For these reasons, the biggest ISPs have already deployed various content-moderation software.

One of the best is Perspective API from Google. It is used by newspapers such as the New York Times and El País in their online editions. It filters users’ comments automatically, but it also has a remarkable tool – still embryonic and experimental – with an educational function: it shows users a real-time score of the “toxicity” of the comment they are typing, warning them when it is likely to be considered offensive[14]. This is an important avenue for counteracting hate online; it may sound like a cliché, but educating the average user will be fundamental in this battle, and this type of software could play a major role in it. After implementing the algorithm, El País noticed that people comment 19% more than before[15], while the New York Times said that it “was able to triple the number of articles on which they offer comments, and now have comments enabled for all top stories on their homepage”[16]. This seems to show how a hate-free environment can spur conversation and public debate.
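The real-time feedback loop described above can be illustrated with a toy sketch. The word list, scores and threshold below are invented for illustration only; Perspective itself relies on machine-learned models served over a REST API, not a lexicon.

```python
# Toy sketch of real-time "toxicity" feedback as a user types a comment.
# The lexicon and scoring rule are hypothetical, for illustration only.

TOXIC_LEXICON = {"idiot": 0.8, "stupid": 0.6, "hate": 0.5}  # invented values

def toxicity_score(text: str) -> float:
    """Return a score in [0, 1]: the strongest toxic cue found in the text."""
    words = text.lower().split()
    return max((TOXIC_LEXICON.get(w, 0.0) for w in words), default=0.0)

def live_feedback(draft: str, threshold: float = 0.5) -> str:
    """Warn the user in real time, before the comment is ever posted."""
    score = toxicity_score(draft)
    if score >= threshold:
        return f"Your comment may be perceived as offensive (score {score:.1f})."
    return "Looks fine."
```

The educational point is in `live_feedback`: the warning reaches the author while typing, so the comment can be softened before publication rather than removed after it.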

Amazon owns the well-known video-game streaming platform Twitch, which integrates a comment system into its streams. These comments are moderated by AutoMod, a semantic-analysis tool that intervenes in real time to block potentially inappropriate comments. Every streamer has their own channel and can decide to block certain words or expressions, with AutoMod assisting throughout; in this way the platform delegates part of the monitoring work to the streamers themselves.[17]
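The per-channel delegation described above can be sketched as follows. The class and its rules are hypothetical, not Twitch’s actual AutoMod implementation, which combines machine learning with streamer-configured term lists.

```python
# Minimal sketch of per-channel word blocking in the spirit of AutoMod.
# Class name and matching rule are hypothetical, for illustration only.

class ChannelModerator:
    def __init__(self, blocked_terms=None):
        # Each streamer configures their own blocklist for their own channel.
        self.blocked_terms = {t.lower() for t in (blocked_terms or [])}

    def allow(self, message: str) -> bool:
        """Hold back a message if it contains any streamer-blocked term."""
        tokens = message.lower().split()
        return not any(t in self.blocked_terms for t in tokens)

channel = ChannelModerator(blocked_terms=["spoiler", "scam"])
print(channel.allow("great play!"))      # True: the message goes through
print(channel.allow("this is a scam"))   # False: held back in real time
```

The design choice worth noting is that moderation policy lives with the streamer, not the platform: the platform supplies the mechanism and delegates the policy.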

Reddit, a social website born to foster discussion on every kind of topic, even the most shocking or disturbing, and which hosts millions of messages daily, has found an interesting way of dealing with haters. Thanks to the very architecture of the website and the help of the community, it isolates users and communities that spread hate: haters continue to operate but cannot spread propaganda, because they lack the visibility to be harmful – and, at the same time, they cannot declare themselves martyrs of free speech. We could call this aiming for a hate-free platform by design.[18]
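This isolation-by-design approach can be sketched as a ranking rule that dampens the visibility of content from flagged communities instead of deleting it. The scoring formula and dampening factor below are invented for illustration; they are not Reddit’s actual algorithm.

```python
# Sketch of "isolation by design": flagged content is not deleted but
# down-ranked so that it gains no visibility. Formula is hypothetical.

def visibility(upvotes: int, quarantined: bool, dampening: float = 0.01) -> float:
    """Rank score for a post; posts from quarantined communities are dampened."""
    base = float(upvotes)
    return base * dampening if quarantined else base

posts = [
    ("normal post", visibility(100, quarantined=False)),
    ("hateful post", visibility(100, quarantined=True)),
]
# Sorting by score pushes quarantined content out of sight without removing it,
# so its authors cannot claim to have been silenced.
ranking = sorted(posts, key=lambda p: p[1], reverse=True)
```

The point of the design is in the sort: nothing is censored outright, yet dampened content never reaches the audience that would let it spread.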


The DSM directive, the Glawischnig-Piesczek v Facebook judgment and the EU Code of Conduct on countering illegal hate speech online are all signals, from different European institutions, of a changing approach to the matter. On the other side, ISPs are developing content-moderation algorithms on their own. Recently Twitter – the most American of all social media when it comes to freedom of speech – decided to “intervene in order to remove tweets that run the risk of causing harm by spreading dangerous misinformation about Covid-19”.[19]

Hate speech and fake news are becoming a huge problem in a society where the time spent online, and the influence of the Internet on people, are growing exponentially. The e-Commerce directive of 2000 has probably run its course: it was meant for another Internet, other ISPs and another society. This is, however, undoubtedly a matter we cannot leave to ISPs alone; institutions should act bravely and try to unify a field that is becoming more fragmented and uncertain every day. While the legislator must set the rules and the limits, ISPs are the only actors with the technical ability to tackle the issue effectively; at the same time, an educational process for users must take place, because the work of the community – through flagging systems, for example – will be essential.

It is a complicated matter; it needs a joint effort from different parties to work well, and it will always be a question of balancing rights and doing so properly. But the times seem ripe for an attempt in this direction.

Content-moderation software is not a panacea: it is a mere tool, as useful and beneficial as the hand governing it. But if we want to take a step towards this new framework, why not give technology the chance to mitigate the very issues it has contributed to exacerbate?





[1] See M. L. Montagnani, A. Trapova, New Obligations for Internet Intermediaries in the Digital Single Market — Safe Harbors in Turmoil?, in «Journal of Internet Law», January 2019.

[2] See P. Faletta, Controlli e responsabilità dei social network sui discorsi d’odio online, in «Medialaws», 1/2020.

[3] CJEU, Case C-18/18, Glawischnig-Piesczek v Facebook Ireland Limited, ECLI:EU:C:2019:821, para. 14.

[4] CJEU, Case C-18/18, Glawischnig-Piesczek v Facebook Ireland Limited, ECLI:EU:C:2019:821, paras 45-46.

[5] See among others CJEU, Case C-70/10, Scarlet Extended v SABAM, ECLI:EU:C:2011:771. Even in that judgment, though, it was more a matter of balancing the rights at stake than an in toto prohibition of filtering content; cf. G. Pitruzzella, O. Pollicino, S. Quintarelli, Parole e potere. Libertà d’espressione, hate speech e fake news, Egea, Milano, 2017, p. 20.

[6] ECtHR, Handyside v United Kingdom, app. no. 5493/72, judgment of 7 December 1976.

[7] See F. Tulkens, When to Say is to Do. Freedom of Expression and Hate Speech in the Case-law of the European Court of Human Rights, in «Seminar on Human Rights for European Judicial Trainers», 9 October 2012, pp. 3-5.

[8] See R. Kiska, Hate Speech: A Comparison Between the European Court of Human Rights and the United States Supreme Court Jurisprudence, in «Regent University Law Review», 25, 2012, p. 122.

[9]See website:

[10] Cf. G. Pitruzzella, O. Pollicino, S. Quintarelli, Parole e potere. Libertà d’espressione, hate speech e fake news, Egea, Milano, 2017, pp. 90-93.

[11] Ibidem, p. 115.

[12] Ibidem, p. 61.

[13]See website:

[14] Cf. G. Pitruzzella, O. Pollicino, S. Quintarelli, Parole e potere. Libertà d’espressione, hate speech e fake news, Egea, Milano, 2017, p. 116.

[15]See website:

[16]See website:

[17] See G. Pitruzzella, O. Pollicino, S. Quintarelli, Parole e potere. Libertà d’espressione, hate speech e fake news, Egea, Milano, 2017, p. 118.

[18] See G. Ziccardi, L’odio online. Violenza verbale e ossessioni in rete, Raffaello Cortina Editore, Milano, 2016, p. 237.

[19]See website:
