Online content and defamation: the emerging British approach


The British government began earlier this year to consider new legislation on defamation, with the publication of a consultation on a draft Defamation Bill in March 2011.[1] Defamation, both online and offline, has long been a controversial area of law in England and Wales. For instance, an allegedly defamatory statement is presumed to be false, with the burden on the maker of the statement to prove that it is true. In many other jurisdictions, the onus is instead on the party alleging defamation to prove that the statement is false; moreover, the norm in civil cases in England and Wales is for the claimant to show that the defendant is liable for the alleged wrong on the balance of probabilities. Furthermore, the current state of defamation law has given rise to the phenomenon of ‘libel tourism’, in which claimants view England and Wales as an advantageous jurisdiction in which to file cases due to this burden of proof on the defendant, especially compared with jurisdictions such as the (also anglophone) United States, which provide defendants with more extensive defences.

The consultation also examines the position of defamation online, and especially liability for defamatory statements published by Internet intermediaries such as Internet Service Providers and providers of user-generated content platforms. The current position of this latter group is somewhat ambiguous. Section 1 of the Defamation Act 1996 provides a defence for those who are not the author, editor or commercial publisher of a defamatory statement: ‘secondary publishers’ such as ISPs can avoid liability for defamatory third-party content if they can show that they took reasonable care in relation to its publication and did not know that their actions caused or contributed to the publication of a defamatory statement. Yet the consultation notes that this provision may not be sufficiently clear or protective of secondary publishers given developments on the Internet such as the prevalence of user-generated content. The consultation also observes that the legal position on defamatory content in blogs and discussion forums is not yet well established in case law: a blog owner, for example, could be viewed as having editorial control over the content of postings, and thus the opportunity to remove any material considered potentially defamatory, and could be liable for defamation for failing to do so. The consultation solicits responses on the reform of this provision in light of the new technological environment.

In October 2011, a Parliamentary Joint Committee published a report on the bill, which also commented on the issue of Internet publication.[2] The Committee proposes a new notice-and-take-down procedure to cover defamation in the online environment. It recognises that the current law in this area in fact encourages Internet hosts and service providers ‘to ignore any dubious material but then remove it without question following a complaint’, which can on the one hand leave defamatory statements online for long periods, and on the other result in entirely legitimate comments being taken down once a complaint is received. The Committee therefore recommends that the pressure on hosts and service providers to take down material challenged as defamatory be reduced (in line with the protection of free speech), and that site owners be encouraged to moderate content written by their users in a way that balances free expression and the protection of reputation. Regarding the take-down procedure, the Committee distinguishes between material whose author is identifiable and material whose author is not. For identifiable material, it suggests that once a complaint alleging defamation is received, the host or service provider should publish a notice of complaint alongside the material but should not be required to remove it, thereby protecting free speech. If the complainant wishes, they can also apply to a court for a take-down order. For unidentified material, the recommendation is that the host or service provider should take it down on receiving a complaint, unless the author responds to a request to identify themselves, in which case the procedure for identifiable material should be followed.
If the host or service provider believes that significant public interest reasons justify publishing the unidentified material, it can apply to a judge for an exemption from take-down and secure a ‘leave-up’ order. Hosts and service providers that comply with these rules should not be liable for defamation. The Committee believes that this procedure should apply equally to moderated and unmoderated sites, and expresses the hope that ‘over time, people will pay less attention to and take less notice of material which is anonymous’.

Insofar as these developments clarify the position of potentially defamatory online content, they are welcome. Nevertheless, two criticisms can be made of the approach so far. First, the Parliamentary Committee’s report appears to characterise the maintenance of anonymity as undesirable, and to do so beyond the specific situation of defamatory comments made by an anonymous Internet user. This view could have a chilling effect on free expression online, particularly given increasing government and corporate surveillance of the medium, not all of which is desirable or even legal. Second, the report makes no distinction between different kinds of online ‘secondary publisher’, especially in terms of the procedure for dealing with potentially defamatory material. There are many types of ‘secondary publisher’ on the Internet, ranging from large, well-organised corporations (e.g. social networking sites) to small, personal, non-profit operations (e.g. a personal blog), and they may well differ in their capacity to fulfil regulatory obligations. This subtlety has been overlooked by the report, which treats Internet content intermediaries as a monolithic bloc.


