The Digital Services Act: A Silver Bullet to Fight Disinformation?

The Commission’s proposal for the Digital Services Act (DSA) and the European Democracy Action Plan (EDAP), both adopted in December 2020, mark a step change in the EU policy approach to online disinformation. So far, policy initiatives at EU level have spearheaded self-regulation by industry, paired with targeted monitoring actions based on the 2018 Code of Practice on Disinformation. The ambition of the two new initiatives is to move from self-regulation towards a co-regulatory framework in a bid to make big tech companies more accountable for their content moderation policies, and to restore safety and trust in the online informational space. With the DSA now entering the legislative phase at the European Parliament and the Council, the question that many observers are raising is whether the proposed framework is really up to the challenge it purports to address.

 

Filling a legal vacuum

The challenge is formidable and urgent. Public awareness and concerns regarding the role of social media as critical vectors of online disinformation have grown steadily during the Covid-19 infodemic and the 2020 US Presidential campaign. Most recently, after Twitter’s and Facebook’s groundbreaking decision to block Mr Trump’s accounts amid the violent protests on Capitol Hill, criticism from civil society, academia and political circles in the EU has focused on the lack of clear rules limiting the discretionary power of online platforms over the content they host on their services. Within the current legal vacuum, powerful digital players can act as ultimate arbiters of democracy and decide what can or cannot stay online solely on the basis of their terms of service (except in the case of manifestly illegal content), something that is widely seen as starkly incongruous with the rule of law and fundamental rights.

The DSA’s aim is to fill this vacuum. While its primary objective is to ensure timely and effective removal of illegal information in justified cases (e.g. hate speech, incitement to violence or defamatory information), its ambition is also to address societal concerns stemming from harmful – but not necessarily illegal – online content, such as disinformation. To this end, it sets out an overarching framework combining three elements:

  • a limited set of due diligence requirements for « very large online platforms » (i.e. services reaching 45 million active monthly users in the EU, or 10% of the EU population), including transparency requirements for content ranking algorithms and advertising systems, as well as the obligation to self-assess, on a yearly basis, the « systemic risks » arising from the operation of their services, and to take appropriate mitigation measures;
  • the possibility for the Commission to invite very large online platforms and other stakeholders to subscribe to codes of conduct when necessary to mitigate systemic risks; and
  • independent oversight and public scrutiny mechanisms (i.e. yearly independent audits, mandatory data disclosures by platforms, and new enforcement tools for national regulatory authorities and the Commission, including the power to impose hefty fines of up to 6% of platforms’ global turnover).

The EDAP complements the DSA by announcing, inter alia, future Commission guidance aimed at paving the way for a revised and strengthened Code of Practice on Disinformation.

However, despite its ambitious aims, the DSA remains within the paradigm drawn up twenty years ago in the e-Commerce Directive, which is based on the principle whereby online platforms are a priori exempted from liability for the content provided by third parties and hosted on their services. By carving certain specific due diligence obligations out of this wide liability exemption, the DSA runs the risk of overlooking the complex dynamics that enable the spread of disinformation in the digital space, and of failing to provide for all the necessary safeguards.

 

Systemic v. endemic risks?

The DSA defines disinformation-related systemic risks as an « intentional manipulation of the service » provided by an individual platform, normally involving « inauthentic use or automated exploitation ». While capturing a number of typical artificial amplification techniques enabling the online spread of disinformation (use of fake accounts, social bots, stolen identities, or account take-overs), this definition seems to overlook other forms of information manipulation.

As explained in greater detail in a separate analysis, disinformation is a multi-faceted phenomenon whose impact depends on fast-evolving technologies, service-specific vulnerabilities and constant shifts in manipulative tactics. Common information manipulation strategies (attention hacking, information laundering, State-sponsored propaganda) do not necessarily entail an artificial or inauthentic exploitation, but rather a strategic use of a platform’s service. Hoaxes or conspiracy theories are often built up through successive information manipulations on various online resources (bogus websites, fringe media outlets, discussion forums, blogs, etc.) before being injected into mainstream social media with a view to normalising the narrative, or legitimising certain information sources, through authentic interactions among users. In other cases, the intervention of influencers or statements from political leaders are the direct cause of the viral sharing of deceptive messages across organic audiences on social media. Moreover, recent cases have shown how entire user communities can migrate from one social network to another, with pieces of false information banned on one site reappearing on another, which suggests that disinformation-related risks are endemic to the whole ecosystem.

These examples demonstrate that the assessment of systemic risks by very large online platforms should cover not only those forms of manipulation that may directly affect the security and technical integrity of their services, but also content- and source-related manipulations that may occur outside their services while still being liable to spread disinformation across their user base.

This point is of key importance, as an overly narrow definition of systemic risks could severely limit the scope and effectiveness of the mitigation measures and other safeguards (self-regulation, independent audits, public scrutiny and sanctions) provided for in the DSA.

 

How to ensure more effective detection?

Given the complexity of information manipulation tactics, it is doubtful that their effective detection and analysis can rely solely on platforms’ self-assessments. In order to develop proper identification responses, the DSA should also include the possibility for vetted researchers and fact-checkers to issue alerts triggering an obligation for the platform concerned to expeditiously carry out internal investigations, by analogy with what it already provides for trusted flaggers in the case of illegal content.

Moreover, online platforms should be encouraged to promote exchanges of information between their security teams, notably to facilitate the early detection of covert coordinated networks or « cross-platform migration » cases. It is also unclear why the DSA contemplates the possibility for users and civil society organisations to submit notices to platforms concerning illegal content, but not concerning possible instances of disinformation.

 

What type of mitigation measures?

The DSA does not shed much light on what is actually required from platforms in order to mitigate risks emerging from disinformation. As the EDAP has announced future Commission guidance to steer the upcoming revision of the Code of Practice, it may be expected that the multi-stakeholder dialogue accompanying this process will provide clearer orientations on a number of critical outstanding issues. Principles for content moderation, responsible algorithmic design and the demonetisation of websites using disinformation as click-bait to attract advertising revenues are important areas where the Commission’s steer will be essential, both to avoid leaving too much discretion to large digital players and to provide a robust legal basis for independent oversight of their policies.
