«How worried should you be?» asks an article recently published in The Economist.
Artificial Intelligence is one of the most discussed legal and tech topics of the last few years. The scholarly debate and the discussions about policy strategies have highlighted the potential risks, benefits, and challenges of this technology, pointing out the critical intersections with the enforcement and translation of legal principles into the AI domain. Information, environment, privacy, data protection, and intellectual property: implementing and using AI systems touches, in one way or another, on all of these legal and social elements.
One of the primary challenges posed by AI is the potential for bias and discrimination in decision-making algorithms. As AI systems become more sophisticated and are given greater autonomy, they may perpetuate and exacerbate existing social inequalities. For example, facial recognition technology has been shown to be less accurate in identifying people with darker skin tones, leading to potential racial profiling and discrimination.
Another challenge is the need to balance privacy and security concerns with the potential benefits of AI. AI systems are capable of processing vast amounts of personal data, raising concerns about data privacy and the potential for surveillance. At the same time, AI can be used to enhance security measures and prevent crime, highlighting the need for careful regulation and oversight.
Additionally, the rise of AI is challenging traditional notions of accountability and responsibility. When an AI system causes harm, it may be difficult to assign responsibility to any one individual or entity, leading to potential legal and ethical dilemmas. This is particularly true in cases where AI systems are given a high degree of autonomy, such as in the case of self-driving cars.
To address these and other trans-national challenges posed by AI, there is a need for coordinated international efforts. This includes the development of common standards and guidelines for the development and use of AI, as well as the establishment of international bodies to oversee and regulate AI technology. It also requires collaboration between governments, industry, and civil society to ensure that the benefits of AI are realized while minimizing its negative impacts.
The adoption of a trans-national approach to the analysis of the issues around AI governance is paramount in light of the ever-increasing globalization of the technology market and of the application of AI systems, as well as in view of the global reach of the data used for training and updating those systems. Balkanized approaches to AI may thus ultimately stifle the market and technological progress. At the same time, they might also affect the overall protection of fundamental rights and human rights in the globalized context.
Even in Europe, the governance of AI risks suffering from the parallel, and not always consistent or coherent, development of two legal instruments: on the one hand, the EU Regulation proposal for the AI Act; on the other hand, the drafting of a Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law by the Committee on Artificial Intelligence (CAI) within the framework of the Council of Europe. In this respect, failure to coordinate between the two legal projects could lead to significant inconsistencies and severe consequences in the AI market.
In conclusion, the rapid advancement of AI is posing significant challenges to law and society that require transnational responses. These challenges include issues related to bias and discrimination, privacy and security, and accountability and responsibility. Addressing these challenges will require coordinated international efforts to ensure that AI is developed and used in a way that benefits everyone.
Against this backdrop, MediaLaws wishes to collect innovative contributions exploring the many trans-national challenges brought up by the rise and spread of AI systems across the market. Contributions may explore research questions such as (but not limited to) the following:
- What impact will the AI Act have on the trans-national governance of AI systems?
- How is the law responding, both within and outside Europe, to the spread of these new technologies?
- Is the EU (current and future) legal framework effective in tackling trans-national challenges of AI?
- How can the protection of fundamental human rights and constitutional principles be fostered vis-à-vis the rise of AI in a global context?
- How can the EU and Council of Europe’s systems be brought to unity?
- Will the global race to regulate AI improve the overall AI governance system, or will it be detrimental to the market and to the guarantee of fundamental rights?
Please follow the submission guidelines linked here.