The year 2020 has been anything but ordinary. The pandemic has further accelerated the digital transformation and strengthened the role of digital platforms in citizens' access to news and other relevant information. It has heightened concerns over the spread of disinformation on such platforms and, more broadly, over the impact that platforms' opaque, data-driven systems of content moderation, targeting and recommendation have on our democratic discourse. The last month of 2020 is now marked by an attempt by the European Commission to address these challenges and the power imbalance between platforms and other actors in the information sphere. On 15 December 2020, the Commission released its proposal for the Digital Services Act (DSA), as part of a wider legislative package, which is to replace the 20-year-old e-Commerce Directive. While this proposal is only the beginning of a years-long legislative process and national transposition, it already shapes the debate in this area, especially as it is accompanied by the European Democracy Action Plan (EDAP), revealed on 3 December 2020, which contains some immediate actions to improve the conditions for informed citizenship and participation in democratic systems by tackling disinformation and manipulation of the public debate, and by enhancing electoral integrity, media freedom and pluralism. This is all part of the European Commission's comprehensive policy on the digital sphere, which covers not only economic aspects, but also the rule of law in the EU, fundamental rights, and democracy.
While acknowledging that digital technologies have transformed democratic politics, bringing more opportunities for civic engagement, the EDAP stresses how the rapid growth of online campaigning and online platforms also creates vulnerabilities for the integrity of elections, and calls for policy interventions that could enhance the “resilience” of democracy. Among the various measures mentioned in the EDAP (such as “a legislative proposal on the transparency of sponsored political content”, which complements the rules on online advertising in the Digital Services Act, and a proposal to restrict “micro-targeting and psychological profiling in the political context”), the developments regarding the Code of Practice on Disinformation deserve to be considered in more detail, as they contribute to shaping the new regulatory trends for online content deemed harmful, but not necessarily illegal, and for online political communication, in particular during electoral periods.
As is widely known, the Code of Practice on Disinformation is a sort of self-regulatory instrument, signed by representatives of online platforms, leading social networks, advertisers, and the advertising industry, who committed to putting in place actions to limit the spread of disinformation and harmful content. The commitments are built around five pillars: demonetization of purveyors of disinformation; transparency in political and issue-based advertising; closure of fake accounts and limits on bots; empowerment of consumers (including through greater findability of trustworthy content); and empowerment of the research community (from fact-checkers to academia). The Code was drafted and signed in 2018, but, since then, it has been hard to assess the effectiveness of its implementation, due to the absence of standards for its evaluation and reporting, the lack of oversight of compliance, the lack of sanctions for non-compliance, and the lack of data against which to check the statements and reports produced by the platforms themselves. The Code relies on signatories' self-reporting and, so far, lacks a methodology and indicators against which the information from these reports can be independently verified. All this makes it quite difficult to consider the Code a real self-regulatory instrument.
Even if this instrument has been the object of (well-grounded) criticism, it must be acknowledged that it marked a first step towards the process that the EDAP and the DSA have now unveiled: the experience of the Code of Practice on Disinformation and the debate around its implementation initiated the elaboration of a strategy towards co-regulatory measures. This is now explicitly announced in the EDAP. While the DSA is expected to design a horizontal framework for regulatory oversight, accountability, and transparency of platforms, the EDAP is already stepping up efforts to revise and strengthen the Code and to transform it from a self-regulatory into a co-regulatory instrument. There seems to be agreement on this path among different stakeholders, and the revision of the Code is planned to involve not only platforms but also advertisers, media, civil society, fact-checkers, and academia. In cooperation with the European Regulators Group for Audiovisual Media Services (ERGA), the Commission is also seeking to set up a more robust framework for monitoring the strengthened Code and its implementation.
The new framework for monitoring the effectiveness of platforms’ policies against disinformation will be based on a new methodology, which includes principles for defining Key Performance Indicators (KPIs) and ensures access to relevant data.
In this context, we would like to use this post as an opportunity to present a project that could contribute to this ongoing process of finding new methodologies to tackle new regulatory challenges, and serve as a sort of laboratory and pilot for new regulatory solutions: the European Digital Media Observatory (EDMO), launched in June 2020 at the European University Institute in Florence and funded by the European Commission. EDMO is structured as a hub and platform for exchange and collaboration between researchers, fact-checkers, platforms, policymakers, and all other relevant actors in the fight against disinformation. One of the tasks of EDMO is to establish an inclusive framework, in line with applicable regulatory requirements, that provides secure access to online platforms' data for research purposes. Functional data access is a key dimension of the transparency of platform operations, and a precondition for any meaningful assessment of the impact that platforms' policies may have on information environments. To this end, EDMO has announced its plans to set up a Working Group on ‘Access to Data Held by Digital Platforms for the Purposes of Social Scientific Research’, whose specific task will be to develop a Code of Conduct under Article 40 of the General Data Protection Regulation laying out privacy-compliant processes for data access.
Another task of EDMO, and one of great relevance for the Commission's plan to set up a more robust framework for monitoring the strengthened Code of Practice on Disinformation, is to develop a sound and feasible methodology, including KPIs, that allows for an objective, independent and comparable assessment of the performance achieved under the Code. The Centre for Media Pluralism and Media Freedom (which coordinates this task within EDMO and with which the two authors are affiliated) has analysed the key commitments and principles upon which the CoP is based and has been designing a methodology that encompasses (a) service-level and (b) media system-level assessment.
The service-level assessment is composed of a qualitative and a quantitative aspect. The qualitative aspect requires a meaningful and clear explanation of the definitions, criteria and procedures used to detect or distinguish certain phenomena (such as trustworthiness, political advertising, or purveyors of disinformation). The quantitative aspect is designed as a set of KPIs measuring progress towards the intended objectives of the Code. The KPIs should, on the one hand, benefit from the collaborative capacities, networks and information provided by EDMO, but, on the other, will also require data provided by the CoP signatories. In order to evaluate the impact of platforms' policies within a specific national context, the service-level assessment is supplemented by a media system-level examination, which uses available multi-country research (e.g. the Media Pluralism Monitor) to indicate the vulnerability or resilience of a member state towards mis- and disinformation. Overall, EDMO strives to develop and test a methodology that is: inclusive (considering current and potential future signatories of the Code); feasible (capable of being implemented on a regular basis under different forms of regulatory regime); mixed-methods based (combining quantitative and qualitative indicators); and data-informed (relying on increased transparency of platforms and functional data access).