Questions and Challenges of Generative AI Models


In 1992, Black Sabbath released “Dehumanizer,” their sixteenth studio album, which opens with the track “Computer God.” The song’s lyrics address the social consequences of the overwhelming power of computers and AI, with lines such as “Computerize God, it’s the new religion.”

Fast forward thirty years: an open letter signed by Elon Musk, Steve Wozniak, Yuval Noah Harari, and other intellectuals, entrepreneurs, and experts has called for a halt to the development of Artificial Intelligence (AI) systems. The letter voices concerns about machines flooding information channels with propaganda and untruth, automating jobs away, developing non-human minds that could eventually replace humans, and the risk of losing control of our civilization.

The letter specifically appeals to all laboratories to pause the training of AI systems more powerful than GPT-4 for at least six months. During this pause, efforts should focus on defining, in cooperation with public institutions, advanced safety protocols to ensure that these systems are safe. The letter also calls for the establishment of new regulatory authorities to oversee the development of AI.

The clarity of the positions expressed in the letter, as well as the diversity of its signatories, cannot be ignored. It raises multiple questions: can public authorities govern such a phenomenon? What tools should they use? And how can compliance with any measures or laws that may be passed be ensured? The latter point is becoming a central theme in the relationship between technology and law.

In recent days, the Italian Data Protection Authority issued an urgent measure against OpenAI, temporarily limiting the processing of Italian users’ personal data through ChatGPT, the best-known conversational AI service. The measure rests primarily on the lack of information provided to users, the absence of a legal basis for the massive collection and storage of personal data, the inaccuracy of the personal data appearing in the system’s outputs, and the lack of any filter verifying the age of users under 14.

OpenAI’s response was to make the service inaccessible from Italy and to announce refunds of the subscriptions paid by “pro” users. This is not the place to discuss the possible defects of the measure, although it is somewhat perplexing that the urgency behind its adoption is nowhere explained, and that the “phone verification” system OpenAI uses for ChatGPT registration was not fully considered (in Italy, one must be 15 years old to own a SIM card).

What is of greater interest, however, is the lack of a legal basis justifying the collection of personal data to “train” the algorithms underlying the platform’s functions. This issue raises significant questions about the legal framework for the development and use of AI and the potential risks to privacy and personal data protection.

Without delving into whether such a basis exists in the case at hand, it is undeniable that the rigidity of the GDPR approach (amplified by the often restrictive interpretations of the EDPB and the Court of Justice of the EU) poses a problem, for which two solutions can be envisaged:

  1. The Artificial Intelligence Act, currently under discussion, could have been the ideal legislative instrument to insert ad hoc legal bases for the processing of data in the context of artificial intelligence systems. Unfortunately, this has not been done: it is a serious gap that could still be filled, thus making that legislation truly “future-proof”;
  2. If ad hoc legal bases are not provided, then the scope of the “legitimate interest” basis under Article 6(1)(f) of the GDPR must be broadened, even interpretively. This would also safeguard other types of processing (such as targeted advertising) for which consent is wrongly treated as the “golden rule,” even though other legal bases could equally apply given the protections the GDPR already requires.

This is a path that must be undertaken urgently.

That being said, any normative or regulatory instrument that is developed to govern artificial intelligence systems cannot ignore the responsibility (including cultural responsibility) of the individual.

Perhaps the question to ask is: are we certain that the safety of these systems can be guaranteed by producing laws, regulations, and guidelines whose correct application would in any case be slow and difficult to ensure? Is the legal instrument still the most powerful one we can imagine for governing change? And should we focus solely on laws, regulations, and guidelines, rather than also strengthening the education of human beings (and of machines) so that they have the cultural and critical tools to be conscious users of these new means rather than their victims?

Returning to the opening musical reference: many years after 1992’s “Computer God,” in 2013 Black Sabbath followed with “God Is Dead?”. That is the regulatory question that will accompany us in the coming years.
