Recalibrating Platforms’ AI Systems: EU advances

  1. Introduction

Recommending digital content is a prominent everyday application of AI. Social media platforms such as YouTube, Facebook, and TikTok use adaptable, automated systems to personalise content. So do search engines, online marketplaces, and multimedia streaming services such as Google, Amazon, and Spotify.[1] Users receive advertisements, content, and search results based on data gathered on them and on users with similar characteristics (content recommendation). Moreover, large social media platforms use automated systems to remove content that infringes laws or terms of service (content moderation) and to handle subsequent complaints.[2] These systems have raised several concerns for fundamental rights and democratic debate. One example is the proliferation of biases through discriminatory content targeting, such as the targeting of job advertisements by gender.[3] Another concern is their addictive design, aimed at maximising usage time.[4] Such concerns raise demands for accountability regarding the design and functioning of AI systems on online platforms.
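To illustrate the kind of logic behind such personalisation, the following minimal sketch shows user-based collaborative filtering, one common technique for recommending content on the basis of data about similar users. The interaction matrix and the recommend function are hypothetical simplifications for illustration, not a description of any platform's actual system:

```python
import numpy as np

# Hypothetical interaction matrix: rows are users, columns are items,
# entries are engagement signals (1 = watched/clicked, 0 = not).
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
])

def recommend(user_idx: int, k: int = 2) -> list[int]:
    """Suggest unseen items, weighted by how similar other users
    are to this one (user-based collaborative filtering)."""
    user = interactions[user_idx]
    # Cosine similarity between this user and every other user.
    norms = np.linalg.norm(interactions, axis=1) * np.linalg.norm(user)
    sims = interactions @ user / np.where(norms == 0, 1, norms)
    sims[user_idx] = 0  # do not compare the user with themselves
    # Score items by the similarity-weighted engagement of other users.
    scores = sims @ interactions
    scores[user > 0] = -np.inf  # exclude items already seen
    return list(np.argsort(scores)[::-1][:k])

print(recommend(0))  # items that users similar to user 0 engaged with
```

Real systems combine many such signals in far more complex models and optimise for business objectives such as engagement time, which is precisely what the regulatory debate sketched below targets.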

The discussion is transnational by the very nature of the corporations concerned. The major US platforms dominating EU markets – most prominently Google and Meta – are also widely used in Brazil, India, Indonesia, Mexico, Pakistan, and the Philippines, among others.[5] This post focuses on the potential of the EU Digital Services Act (DSA)[6] and the proposed Artificial Intelligence Act (AI Act)[7] to recalibrate the content management of social media platforms and search engines. It particularly explores disclosure and risk analysis obligations.

 

  2. Layers of Legal Protection

The novel core EU legislation governing the management of content is the Digital Services Act (DSA). It introduces obligations on disclosures, the removal of content that is illegal under Union or national law, redress procedures, and the management of systemic risks. While not explicitly mentioning AI, the DSA includes several references to the use of algorithms and automated systems. Arts 14 and 27 DSA oblige online platforms to provide information on algorithmic content moderation and recommender systems in their terms of service. Further obligations apply to very large online platforms and search engines with more than 45 million monthly active EU users (VLOPs and VLOSEs). In their analysis of risks to fundamental rights, public discourse, public security, and human health, they must particularly consider algorithmic content management systems (Art 34 DSA). However, the choice of risk mitigation measures lies with the VLOPs and VLOSEs; the DSA inter alia suggests testing and adapting algorithmic systems (Art 35). Platforms must disclose the results of this risk analysis to auditors and the Commission and, with multiple exceptions for security reasons, publish them (Art 42). If necessary for monitoring DSA compliance, VLOPs and VLOSEs must explain the design, functioning, and testing of their algorithmic systems to the Commission and national Digital Services Coordinators (Art 40) and enable the former to access their systems and databases (Art 72). Within the Commission, the European Centre for Algorithmic Transparency[8] shall assist in supervision and enforcement.

The proposed AI Act holds the potential to add another layer of protection. While Art 2(5) AI Act gives precedence to DSA rules that limit the liability of intermediary services for user content, this precedence does not extend to platforms’ own AI systems for content management. However, these systems are not currently included in the list of high-risk AI systems in Annex III(1)(5) AI Act. The Commission may add them by delegated act (Art 7). One may well argue for such an addition, given platforms’ central role as fora for public debate and trade. This role renders them «essential private and public services necessary for people to fully participate in society or to improve one’s standard of living», equivalent to the telecommunications systems mentioned in Recital 37 of the proposal.[9] Categorisation as high-risk systems would inter alia require platforms to conduct risk assessment and mitigation, ensure the quality of training and testing data, inform users about the functioning and limitations of the system, and ensure human oversight, robustness, and accuracy.[10] This provides the Commission with an opportunity to strengthen the obligations should DSA transparency and risk analysis not go far enough, for example if platforms select risk mitigation measures other than adaptations to their AI systems in the DSA risk analysis. If, however, platforms’ AI systems remain categorised as posing minimal or limited risk, platforms need only fulfil specific obligations to inform users when they interact with AI chatbots or are subject to emotion recognition systems (Art 52).

As it stands, the DSA offers the EU the first chance to ensure that social media platforms and search engines recalibrate their AI systems with regard to fundamental rights and democratic debate. However, some fear it remains too vague and procedural to generate substantive changes in platforms’ systems.[11] This concern particularly regards the risk analysis obligations.[12] Whether it proves justified will also depend on the Commission’s enforcement of the Act. The AI Act offers further potential. This renders it crucial to independently monitor the Acts’ practical effects and the Commission’s stance in implementing them. The key question is whether and how platforms increase safeguards for constitutionally protected values in the use of AI systems, even where such safeguards are costly.

  3. Transnational Take-Aways

The DSA and AI Act interact with the transnational regulatory debate in multiple ways. Within Europe, the AI Act will receive its counterpart in the currently negotiated Council of Europe Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. The draft Convention is broader than the AI Act in several respects, particularly in foreseeing obligations for all AI systems regardless of risk categories.[13] Moreover, it is open to the accession of non-member states. Transnationally, DSA practices are expected to reach beyond the Union, as transnational platforms may apply many of its rules globally and other legislators may take inspiration from the law.[14] As a result, a patchwork of regulations may come to govern the use of AI systems on social media platforms – with the DSA as the frontrunner.

The DSA and AI Act are open-ended attempts rather than undisputed endpoints of the regulatory debate. Much is to be learnt from their implementation in practice.[15] This view enables further and alternative advances to hold platforms accountable should the Acts in practice fail to remedy the underlying concerns. Beyond the EU, such advances can be informed by lessons stemming from the DSA’s disclosure and risk analysis obligations.

[1] See University of Helsinki, Elements of AI, in course.elementsofai.com for a technical introduction.

[2] See for example Meta, How Meta prioritises content for review, in transparency.fb.com, 26 January 2022; R. Darbinyan, The Growing Role Of AI In Content Moderation, in forbes.com, 14 June 2022.

[3] C. Duffy – C. Dotto, People are missing out on job opportunities on Facebook because of gender, research suggests, in edition.cnn.com, 12 June 2023.

[4] M. McCluskey, How Addictive Social Media Algorithms Could Finally Face a Reckoning in 2022, in time.com, 4 January 2022.

[5] See DataReportal – We are Social and Meltwater, Leading Countries Based on YouTube Audience Size as of January 2023 (in Millions), in statista.com, 6 February 2023; DataReportal – We are Social and Meltwater, Leading Countries Based on Facebook Audience Size as of January 2023 (in Millions) [Graph], in statista.com, 24 February 2023.

[6] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act), OJ L 277.

[7] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts, COM/2021/206 final.

[8] European Commission, European Centre for Algorithmic Transparency, in algorithmic-transparency.ec.europa.eu.

[9] See E. Cremona, I servizi privati essenziali nella proposta di AI Act: prime considerazioni, in medialaws.eu, 8 March 2022.

[10] Chapter 2 AI Act.

[11] See for example S. B. Castellaro – J. Penfrat, The DSA fails to reign in the most harmful digital platform businesses – but it is still useful, in verfassungsblog.de, 8 November 2022.

[12] A. Peukert, Five Reasons to be Sceptical About the DSA, in verfassungsblog.de, 31 August 2021.

[13] Council of Europe, revised “Zero Draft” [Framework] Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.

[14] D. Keller, The EU’s new Digital Services Act and the Rest of the World, in verfassungsblog.de, 7 November 2022; See also A. Bradford, The Brussels Effect: How the European Union Rules the World, Oxford, 2019, 131.

[15] J. van Hoboken and others, The DSA has been published – now the difficult bit begins, in verfassungsblog.de, 31 October 2022.
