Time to act against fake news


This piece was originally published on euractiv.com on 24 November 2020.

The COVID-19 outbreak has shown the importance of reliable news for citizens and democracy. However, this “trust halo” may be short-lived. Meanwhile, platforms are under pressure to reduce the visibility of disinformation: the US post-election period shows that flagging ‘fake news’ is not enough.

As the EU moves from self-regulation of platforms to co-regulation, ‘regulatory guidance’ is needed to ensure prominence for trustworthy news sources, both under the (slow) legislative process of the Digital Services Act and under the (quicker) action plans.

Critical Juncture

Large online platforms are both strategic vectors of disinformation and gateways to journalistic sources. As recently acknowledged by the Commission, Facebook, Google, Twitter, Microsoft and TikTok have stepped up efforts to curb disinformation since the outbreak of the COVID-19 crisis.

Yet, this is reactive rather than proactive. The lack of transparency, oversight and sanctions is a major flaw of this self-regulation. This view is shared by European audiovisual regulators, by the Commission, and by the disinformation Sounding Board.

Last September, France, Lithuania and Latvia agreed to a Joint Statement on protecting democracies. This urges the EU to adopt a regulatory backstop, including obligations for “basic standards for social responsibility in algorithmic design”. This call may be echoed in the December European Council conclusions on safeguarding a free and pluralistic media system.

Basic Standards for responsible algorithmic design 

One’s daily news increasingly depends on content ranking algorithms and recommender systems, which prioritize what is displayed in social media feeds and search results. These systems are driven mainly by signals reflecting the popularity of posts and individual preferences (inferred from tracking users’ behaviour).

But we still know little about input signals reflecting trustworthiness. This opacity fosters the virality of disinformation. And as virality is a powerful driver of advertising revenue, adjusting algorithms for quality cannot be left to platforms alone.

No silver bullet: several Indicator Providers feeding platforms

Transparency and scrutiny over algorithmic content selection are needed, but not sufficient. In its 2018 Report, the High-Level Expert Group on disinformation pointed to using appropriate indicators for news sources as inputs into algorithms.

Accordingly, the signatories of the Code of Practice committed to trustworthiness indicators, to be elaborated with the news ecosystem. Two years have gone by, but these commitments remain largely unfulfilled.

So, what is required?

The notion of trust indicators has led to a number of misunderstandings, including fears that such indicators could translate into a single label ‘grading’ the quality of sources. Indeed, such an approach would not work.

First, each news outlet has its own readership, editorial orientation and business model. For example, the quality of tabloid newspapers varies, but they do subscribe to media standards. Second, labels are notoriously ineffective at influencing users’ behaviour.

Indicators of trustworthiness should rather be seen as a variety of statistical signals fed into platforms’ systems, empowering users and thus increasing the quality of their interactions with online content. There might be small shifts in the relative attention paid to different media brands, but overall media attention, and presumably advertising, would increase.

To truly empower users, there should not be a single source of trustworthiness signals. The goal should be a competitive environment for indicators, leaving users free to switch between them, as they do with privacy settings. Indicators would be their Ariadne’s thread in the complex digital landscape.
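To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of how scores from a user-selected indicator provider could enter a ranking formula alongside the popularity and preference signals described above. All names, scores and weights are hypothetical assumptions for illustration; no platform’s actual system is described.

```python
from dataclasses import dataclass

@dataclass
class Post:
    source: str
    popularity: float  # hypothetical engagement signal, 0..1
    affinity: float    # hypothetical inferred user preference, 0..1

def trust_signal(source: str, indicator: dict) -> float:
    # Score from the user's chosen indicator provider;
    # unknown sources get a neutral score rather than a penalty.
    return indicator.get(source, 0.5)

def rank_score(post: Post, indicator: dict,
               w_pop: float = 0.3, w_aff: float = 0.2,
               w_trust: float = 0.5) -> float:
    # Today's feeds weight roughly the first two terms; the proposal
    # adds an externally provided trustworthiness term. Weights are
    # illustrative only.
    return (w_pop * post.popularity
            + w_aff * post.affinity
            + w_trust * trust_signal(post.source, indicator))

# Users could switch indicator providers much like a privacy setting:
provider_a = {"outlet-a.example": 0.9, "clickbait.example": 0.2}
posts = [Post("outlet-a.example", popularity=0.3, affinity=0.6),
         Post("clickbait.example", popularity=0.9, affinity=0.7)]
feed = sorted(posts, key=lambda p: rank_score(p, provider_a), reverse=True)
print([p.source for p in feed])  # trust signal outranks raw virality
```

Because the indicator lookup is external to the scoring function, providers remain swappable, which is what keeps the market for indicators competitive.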

Indicator providers (IPs) would be independent bodies, financed by a small levy on the platforms using them, like auditors or financial rating agencies.

While there may be national IPs (not State-controlled) or global ones, Europe would be a relevant market for such bodies. Criteria and common metrics could come from research triggered under the new European Digital Media Observatory.

There are already some promising initiatives. For instance, the Journalism Trust Initiative led by Reporters Without Borders would enable participating media to self-assess compliance with transparency and professional requirements. Other initiatives include the Trust Project, the Global Alliance for Responsible Media, the Global Disinformation Index, and more will come.

A virtuous loop for trust, quality and advertising revenue

Improved algorithms would not only decrease attention to fake news, but also generate more awareness amongst advertisers, creating a virtuous loop for revenue and quality.

This could be welcomed by the news media, for two reasons. Firstly, reflecting such indicators in algorithms would improve the findability of professional news, boost readership and enhance brand loyalty, thereby increasing the value of ad placements.

Secondly, indicators would help demonetise disinformation websites and redirect some advertising revenues towards media outlets.

The integration of external indicators into algorithms would force platforms to adjust aspects of their business model, but the benefit for them would be to share responsibility for algorithmic design with a wider community.

EU institutions: Time to act, before legislating

The Democracy and Media Action Plans and the Digital Services Act should push these initiatives forward. In addition to general transparency obligations, mandating indicators now fits naturally in this context.

The European Parliament showed the way in October with its resolution on the DSA and fundamental rights. Member states are moving in the same direction, looking at the overall information ecosystem.

Also, a year ago, an Open Letter to President von der Leyen from parliamentarians and media experts urged the EU not to wait for new laws: under competition law, indicators could become a ‘must carry’ obligation for ‘systemic publishing platforms’. Like the Code of Practice, both action plans could anticipate the outcome of the DSA legislative process.

Let’s not miss the momentum on both fronts: less disinformation, and a better news ecosystem.

 

* Marc Sundermann (former Bertelsmann EU representative) and Christophe Leclercq (founder of EURACTIV) are respectively Senior Fellow and Chairman of Europe’s MediaLab. They cooperated as members of the EU’s High-Level Expert Group on disinformation. Paolo Cesarini is a former official with the European Commission. 
