Regulating Deep Fakes in the Proposed AI Act


Angelica Fernandez, University of Luxembourg [1]

 

 

  1. Introduction

 

Since 2018, deep fake technology has been one of the areas in which artificial intelligence has evolved most dramatically, and governments increasingly view deep fakes as an emerging threat. In particular, regulators are concerned about the development and application of this technology in two main areas: image-based sexual abuse and disinformation[2]. Despite the technology's growing popularity, it remains challenging to define what deep fakes are and what exactly ought to be regulated about the phenomenon. For this reason, it is surprising that the proposed EU Artificial Intelligence Act (hereafter: AI Act)[3] refers to deep fakes and brings them within its scope, in a first attempt to regulate the phenomenon at the EU level. The proposed approach raises several concerns, and the enforcement of the new transparency obligation is one of them.

  2. A new transparency obligation for deep fakes

Deep fake systems are subject to specific transparency obligations under Article 52(3) of the proposed AI Act. Under the AI Act's risk-based approach to AI systems, the European Commission treats deep fakes as limited-risk AI applications. The provision aims to protect natural persons from the risks of impersonation or deception when an AI system generates or manipulates image, audio, or video content that appreciably resembles existing persons, places, or events and would falsely appear to a user of the system to be authentic or truthful.

Against the background of divergent interpretations of what constitutes an AI system and, in particular, of what deep fakes are, this attempt at regulation is surprising. At first glance, there seems to be a consensus among scholars, companies working with deep fakes, and media outlets on two defining elements of deep fakes: (i) the use of AI-based technology and (ii) the intent to deceive. In practice, however, this seemingly consensual definition raises difficulties, particularly when drawing boundaries between deep fakes and less sophisticated audio-visual manipulation (i.e., cheap fakes)[4]. Lawyers litigating cases involving deep fake technology currently rely on a patchwork of provisions, ranging from privacy law to copyright, without much success. While many factors contribute to this situation, the lack of clarity about what constitutes a deep fake is one of them. Getting the scope of the definition right is essential to appropriately address the distinct harm profile of deep fake technology, particularly with respect to image-based sexual abuse and disinformation. Equally important, taking account of these practical challenges when determining the scope will enhance the enforceability of the provision.

Overcoming these challenges is particularly important for two reasons. First, the use of such technologies has a gendered dimension: it disproportionately affects women, who are the target of approximately 90% of deep fakes in the form of non-consensual fake pornography. Second, it is difficult for users and victims of malicious uses of deep fake technology to obtain redress for the harm they suffer. Enforcement is therefore key to regulating deep fakes, because it offers victims a realistic prospect of redress.

  3. Enforcement issues of the transparency obligations in Article 52(3) AI Act

However, the current provision of the AI Act does not seem to clarify the situation for ordinary users who may be harmed by deep fake technology.

By classifying deep fake technologies as limited-risk systems and imposing only minimal transparency requirements, the AI Act does not couple this obligation with an explicit sanction for non-compliance. Consequently, there are no strong incentives to comply with the transparency rule. Moreover, if a market surveillance authority of a Member State were to find that an AI system presents a risk, Article 67 of the AI Act would require re-assessing the risk of the system and reclassifying it in order to tighten the obligations applicable to it. This means that the only risk mitigation available for a prima facie limited-risk AI system is an amendment of Annex III of the proposed AI Act. Annex III contains the list of high-risk AI applications, and only the systems listed therein are subject to the vast majority of the Act's requirements. However, the dual uses[5] of deep fake technology and the intrinsic link between detection and creation systems make these risk assessment distinctions difficult to draw[6]. For example, deep fake detection systems are listed as high-risk systems in Annex III of the proposed AI Act, while all other uses are considered to pose only a limited risk.

Problematic deep fake products have already been deployed on the market, as in the widely reported case of the DeepNude app in the US, which "undressed" photos of women by creating deep fakes of their bodies. DeepNude was eventually taken off the market, but similar applications are still operating, including dedicated Telegram and WhatsApp channels offering the same type of service[7]. If concerns arose about the deployment or application of a particular deep fake product or system in the EU, the process for changing its risk level from limited to high risk would not be user-friendly, as the procedure must be led by the European Commission. Moreover, the market surveillance authorities, which play an essential role in ensuring compliance with the proposed AI Act, do not appear to have the media forensics expertise needed to conduct investigations in this area. Deep fakes are a complex issue, likely far removed from the activities these authorities have carried out until now. Additionally, it is unclear who will enforce this transparency obligation, which raises the crucial question of who gets to decide which content falls under these categories. Article 52(3) provides for some exceptions, but divergent interpretations of these exceptions will likely allow many deep fakes to remain unlabeled and thus unregulated.

Moreover, Article 69 of the proposed AI Act seeks to promote codes of conduct as a means of achieving voluntary compliance with the Act's requirements for AI systems other than high-risk ones. Enforcement will therefore most likely depend entirely on the voluntary commitment of platforms (or any other online content dissemination channel) to abide by the proposed approach. However, platforms' current efforts to counter malicious uses of deep fakes have focused on improving detection[8]. Relying only on self-regulation increases the risk that this over-attention to detection skews legislative proposals away from other types of solutions that could effectively deter creation and dissemination and ultimately significantly improve the legal avenues currently available for addressing victims' harms.

Furthermore, this type of transparency requirement for online content is not new in the EU legislator's toolkit. A recent example of a similar requirement is found in the disclosure and labeling obligations in disinformation strategies concerning political advertising and coronavirus disinformation, implemented through the EU Code of Practice on Disinformation from 2018 to 2021. In its final assessment of the Code, the European Regulators Group for Audiovisual Media Services (ERGA) confirmed the preference of industry stakeholders, namely online platforms, for this type of measure, while highlighting the challenges in its enforcement[9]. One of the main lessons from the application of the Code of Practice on Disinformation is that labels alone are not an effective measure to counter disinformation or deter its creation and dissemination. This likely applies to manipulated content as well, even though deep fakes were not mentioned in the first version of the Code. Given the enforcement challenges raised by ERGA concerning these voluntary measures, it is questionable whether imposing a transparency requirement is a proportionate response to the harms experienced by victims, and whether this type of measure helps lawyers litigate these cases. A new iteration of the Code of Practice on Disinformation is expected to be published in March 2022. This "strengthened Code should consider the transparency obligations for AI systems that generate or manipulate content and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act"[10].

Finally, deep fakes deserve regulatory attention given their exponential rise and the deep, personal harm they inflict on a largely vulnerable population. By including deep fakes in the proposed AI Act, the European Commission makes a first, forward-looking attempt to regulate them. Even though the enforcement setup for regulating deep fakes under the proposed AI Act is ambiguous and raises many questions, there is still time in the ongoing discussions to improve these aspects. More generally, if users are to rely solely on labels to judge whether they are interacting with manipulated media, more research into effective label design is needed, since newer ways of enhancing transparency exist but are not necessarily implemented by companies. This type of transparency obligation should also not hinder the legitimate uses of deep fake technology, another aspect that remains to be clarified in the proposal.

 

 

[1] Angelica Fernandez is a PhD candidate at the University of Luxembourg, supported by the FNR. This blog post is based on a forthcoming article, "'Deep fakes': disentangling terms in the proposed EU Artificial Intelligence Act", to be published in UFITA 2/2021, DOI: 10.5771/2568-9185-2021-2-386.

[2] For a comprehensive background on the state of deep fakes in the EU, see European Parliamentary Research Service, Scientific Foresight Unit (STOA), Tackling Deepfakes in European Policy, Luxembourg, 2021, PE 690.039.

[3] Proposal for a Regulation of the European Parliament and of the Council Laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. COM(2021) 206 final.

[4] See B. Paris – J. Donovan, Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence, in datasociety.net, 2019.

[5] "Dual use" terminology is commonly used for technology that has both military and civil or commercial applications. Here, it refers to technology that simultaneously serves legitimate commercial goals, such as voice deep fakes developed for e-learning platforms or video games, and malicious objectives, such as voice deep fakes used for telephone scams or fraud.

[6] The computer science literature often refers to the adversarial nature of deep fake technology: in many cases, systems that detect deep fakes compete with the systems that create them, and both improve in the process.

[7] See K. Hao, An AI App That “Undressed” Women Shows How Deepfakes Harm the Most Vulnerable, in technologyreview.com; G. Patrini, Automating Image Abuse: Deepfake Bots on Telegram, in giorgiop.github.io, 20 October 2020.

[8] See, for example, Facebook's Deepfake Detection Challenge initiative: C. Canton Ferrer et al., Deepfake Detection Challenge Results: An open initiative to advance AI, in ai.facebook.com, 12 June 2020.

[9] For details on the assessment, see ERGA, ERGA Report on Disinformation: Assessment of the Implementation of the Code of Practice, in erga-online.eu, 2020.

[10] European Commission, Guidance to Strengthen Code of Practice on Disinformation, in ec.europa.eu, 26 May 2021.
