The Role of Data Protection Authorities as AI Supervisors


1.      Introduction

In the current European Union (EU) regulatory landscape regarding Artificial Intelligence (AI), with an AI Act compromise text whose final agreement is not guaranteed,[1] there is as yet no binding AI-specific regulation in force. Nevertheless, AI designers are not waiting for such a text, or an analogous one, to be approved before developing, testing, selling and deploying their products. This has produced a scenario in which AI is widely in place in the EU, constantly interacting with the fundamental rights of the citizens subjected to it.[2] In such a scenario, one might ask: who is guaranteeing that fundamental rights are respected if there is no existing AI regulation that can be enforced?

2.      The General Data Protection Regulation and the Role of Data Protection Authorities as AI Supervisors

The General Data Protection Regulation (GDPR)[3] is currently one of the most frequently invoked regulatory texts when dealing with the impact of AI on fundamental rights. This is because many AI applications base their outputs on the processing of personal data.[4] From predictive policing to CV screening, many AI applications are trained, tested and deployed on the basis of the processing of large quantities of personal data. In this regard, the principles set out in Article 5 GDPR have been wielded as legal safeguards against disproportionate and unlawful uses of AI-empowered systems.

While the GDPR provides for its enforcement before Member States’ and European courts, it also contemplates the existence of national supervisory authorities (Data Protection Authorities, hereinafter DPAs), as well as a European Data Protection Supervisor (EDPS) and a European Data Protection Board (EDPB).[5] Such authorities have produced a vast output in the form of decisions, opinions, reports and blog posts in which they have informed and educated, but also ruled on the lawfulness of controversial and contested uses of AI tools. To give just one example, concerning Facial Recognition Technology (FRT) (an AI-empowered technology), almost all national DPAs, the EDPB and the EDPS have issued several outputs in the abovementioned formats on the functioning, use and lawfulness of that technology.

As previously mentioned, judicial redress is also possible. However, up to this point, there are not many judicial decisions on the use of AI technology, either at the national or the EU level. To continue with the FRT example, there are no judicial decisions at the EU level on the topic and only two pieces of national case law, both before French courts.[6] Time and the increasing spread of AI will likely bring more cases on contested uses of the technology before both national and EU courts, but judges’ lack of technical expertise will pose a challenge in this process.

3.      Whose legitimacy?

If DPAs are the authorities currently on the spot, deciding on the interference of AI’s use with EU citizens’ fundamental rights, the next question is: are they legitimated to perform such a task?

From a legal perspective, as explained in the previous section, the GDPR allows them to intervene insofar as an AI system processes personal data. Therefore, AI systems which do not process personal data will not be supervised by DPAs. For those systems which do process personal data, DPAs are competent as long as a breach of the GDPR occurs. However, DPAs are supervisory authorities, not judicial ones, and therefore their powers and approaches, although they sometimes overlap confusingly, are not the same. This could lead to a sense of uncertainty. Yet, since DPAs’ “rulings” may be subject to judicial review, there is, in the end, a double layer of protection as far as the use of personal data-driven AI technologies is concerned.

From a more practical perspective, DPAs were not conceived by the GDPR as the deciding authorities on contested personal data-driven AI technologies. As a consequence, this role may impose an additional and crucial burden on authorities that, in several cases, are underfunded and underequipped to face such a challenging task.[7] Further, this task has placed DPAs in the public eye. For instance, the Italian DPA’s well-known decision to ban ChatGPT (an AI application)[8] has earned it a great deal of public recognition and notoriety. While this recognition might be a positive thing, making DPAs more accessible to the general public, it might also entail an additional workload for which some of them may not be prepared.

4.      Conclusion

Because a great many AI applications derive their outcomes from the vast processing of personal data, DPAs currently play a crucial role on the front line of protecting fundamental rights against AI risks. While this role may also be backed up by courts, to date it is DPAs, and not judicial authorities, that are the deciding authorities on most matters relating to personal data-driven AI. However, such a role may impose a toll on DPAs, both in terms of public perception and workload. Only by properly equipping DPAs and ensuring that their activity is carried out in accordance with the GDPR’s postulates can this crucial role be sustained.

*Natalia is a PhD candidate at the European University Institute and a Research Assistant at the Center for a Digital Society.

[1] L. Bertuzzi, AI Act’s plenary vote cast with uncertainty as political deal crumbles, 7 June 2023.

[2] A. Simoncini – E. Longo, Fundamental Rights and the Rule of Law in the Algorithmic Society, in H. Micklitz, O. Pollicino, A. Reichman, A. Simoncini, G. Sartor, & G. De Gregorio (Eds.), Constitutional Challenges in the Algorithmic Society, Cambridge (UK), 2021, 27.

[3] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

[4] S. Chatterjee – N.S. Sreenivasulu, Personal Data Sharing and Legal Issues of Human Rights in the Era of Artificial Intelligence: Moderating Effect of Government Regulation, in International Journal of Electronic Government Research, 15(3), 2019, 21-36.

[5] On Independent Supervisory Authorities, see Chapter 6 GDPR. On the right to an effective judicial remedy, see Articles 78 and 79 GDPR.

[6] Tribunal administratif de Marseille Case N°1901249, 27 February 2020 and Conseil d’Etat, Décision N° 442364, 26 April 2022. Outside of the European Union, see R (on the application of Bridges) v Chief Constable of South Wales Police ([2020] EWCA Civ 1058).

[7] European Data Protection Board, Overview on resources made available by Member States to the Data Protection Authorities and on enforcement actions by the Data Protection Authorities, 5 August 2021.

[8] Garante per la Protezione dei Dati Personali, Provvedimento del 30 marzo 2023 [9870832].
