Navigating the Transatlantic AI Landscape: The EU Artificial Intelligence Act and its Ripple Effect on the US

This op-ed explores the impact of the European Union's Artificial Intelligence Act (AIA) on the US digital market. The AIA – aimed at establishing ethical and responsible AI governance – has sparked debate, particularly in global tech circles. The article delves into three key areas of concern for US firms: challenges in accessing and servicing the EU market, disclosures of proprietary information, and constraints on innovation involving transatlantic cooperation. It argues that while the AIA promotes a human-centered approach and addresses ethical concerns, its stringent regulations may inadvertently impede innovation, hinder global collaboration, and create regulatory disparities between the EU and the US. Finally, it suggests that striking a balance between addressing AI-related risks and fostering an environment conducive to technological advancement is crucial for sustaining growth and collaboration in the global tech industry.


Summary: 1. Introduction. – 2. Challenges accessing and servicing the EU market. – 3. Disclosures of proprietary information. – 4. Constraints on innovation involving transatlantic cooperation.


  1. Introduction

«Regulation is taking over innovation». This phrase is usually invoked to resist the overarching regulatory power of the EU in a rapidly evolving global digital landscape. The European Union's recent move to regulate the technologies gathered under the umbrella term of AI[1] through the Artificial Intelligence Act (AIA) has stirred significant debate, especially in global tech circles[2]. The AIA marks a stride towards ethical and responsible AI governance or, at a minimum, towards the responsible use of these technologies. Yet it is worth asking what impact the AIA would have on the US digital market, and how. While the EU aims to regulate AI technologies to ensure ethical and transparent practices, the implications of the AIA could prove detrimental for the United States market. Among the limitations potentially faced by US firms are challenges in accessing and servicing the EU market, disclosures of proprietary information, and constraints on innovation involving transatlantic cooperation. The concerns addressed by the AIA are not exclusive to European regulators but are shared by legislators across the globe[3], albeit pursued through different policy approaches[4]. However, the AIA might have a prominent impact on the US digital market, forcing firms to limit their market size rather than encouraging frameworks for cooperation.

The AIA, proposed by the European Commission, aims to establish a comprehensive framework for developing and deploying AI within the EU. Alongside this complex regulatory process[5], one of the crucial scenarios emerging from the Act is a regulatory dissonance between the EU and the US. With the US being a key player in the tech industry, member states were concerned that differing regulatory frameworks could hinder innovation and collaboration[6]. Nevertheless, following the EU trilogue negotiations, the co-legislators reached a political agreement on the AIA on December 8, 2023[7].

Although there is no definitive text of the AIA yet, there is consensus on its objectives, which include fostering innovation, protecting fundamental rights, and ensuring the safety and accountability of AI systems[8]. In line with these objectives, the Act adopts a human-centered approach[9]. While critics argue that such regulation might stifle technological advancement, proponents assert that addressing the increasing ethical concerns surrounding AI is necessary. A commitment to EU values implies imposing restrictions «to protect human dignity, data privacy, democratic discourse, or other core rights of European digital citizens»[10].

This approach is evident in the AIA's stringent requirements, which follow a risk-based categorization that prioritizes the risk of harm to human rights and privacy. As a result, the AIA is expected to cover four categories of risk: prohibited (or unacceptable-risk) AI systems, high-risk systems, limited (or specific transparency) risk systems, and minimal-risk systems[11].
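To make the risk-based tiering concrete, the short sketch below models the four categories and the market-access consequence attached to each. It is a minimal illustration only: the tier labels follow the Act, but the example use cases and the one-line obligation summaries are this comment's own simplifications, not the Act's legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """The AIA's four risk tiers; everything mapped to them below is illustrative."""
    PROHIBITED = "prohibited (unacceptable risk)"
    HIGH = "high risk"
    LIMITED = "limited (specific transparency) risk"
    MINIMAL = "minimal risk"

# Hypothetical use-case-to-tier mapping, invented for illustration:
# the Act assigns tiers through detailed legal criteria, not a lookup table.
EXAMPLE_TIERS = {
    "social scoring of citizens": RiskTier.PROHIBITED,
    "CV screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

# Simplified market-access consequence per tier.
CONSEQUENCES = {
    RiskTier.PROHIBITED: "no EU market access",
    RiskTier.HIGH: "access conditional on conformity assessment and disclosures",
    RiskTier.LIMITED: "access conditional on transparency duties",
    RiskTier.MINIMAL: "access with no additional obligations",
}

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case} -> {tier.value}: {CONSEQUENCES[tier]}")
```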


  2. Challenges accessing and servicing the EU market

Following a precautionary principle, the AIA denies market access to technologies that would cause direct or indirect physical or psychological harm, specifically where interventions to mitigate those risks would be inadequate or ineffective (prohibited AI systems). The remaining technologies will have market access only if they comply with a series of more or less stringent requirements, primarily disclosure requirements. In other words, the regulation will affect the commercialization of these systems while leaving the door open to their mere development.

The AIA is primarily focused on how tech companies should conduct their business in compliance with a human-centered approach. The question then remains what a human-centered approach entails and whether it can be reconciled with a for-profit business mindset. The answer is anything but linear. While the intent is to safeguard against potential harm, stringent regulations may deter entrepreneurs and startups from venturing into high-risk, high-reward areas. Commentators pointed out an initial problem: the Act mischaracterizes AI technologies by implying that all of them have a specific purpose[12]. This challenge could be partially resolved after criticism during the negotiations highlighted the inappropriateness of the Act for regulating certain areas, such as foundation models – a form of generative AI – usually left to industry self-regulation and standards[13]. If left unmodified, the Act could over-inclusively capture crucial technologies. Instead, as these AI models take shape over time, regulation should focus concretely on how the technologies are used in practice[14]. Regulating foundation models may mean that US companies (Big Tech) can afford to comply while EU startups working on these models struggle.

On the contrary, the strong regulatory approach suggests that the EU is not interested in fostering competition: since the US is a leading market for these technologies, the only way to arrest a competitive race to which the EU is not invited[15] is to circumscribe the resources that make US firms big. In this sense, it has been argued that a way to limit these resources is through data protectionism[16]. In addition, much of the AIA's wording is vague, as if the battle were between humans and algorithms rather than about AI systems and their effects on humans. This regulatory vagueness introduces another type of risk firms must face: increased litigation costs. Firms would have to resort to courts to define what counts as a sufficient error rate for an AI system, a threshold that, at the moment, remains unclarified. This approach may discourage US tech companies from expanding into the EU market or engaging in joint ventures.
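The litigation-cost point can be illustrated with a toy calculation. The same hypothetical system produces very different "error rates" depending on which metric an adjudicator adopts; all figures below are invented for illustration, and nothing in the AIA text prescribes any of these metrics or thresholds.

```python
# Toy confusion-matrix figures for a hypothetical high-risk AI system
# (all numbers invented for illustration).
true_positives, false_positives = 90, 30
true_negatives, false_negatives = 870, 10

total = true_positives + false_positives + true_negatives + false_negatives  # 1000

# Three defensible readings of "error rate" for the very same system:
overall_error = (false_positives + false_negatives) / total                  # 4.0%
false_positive_rate = false_positives / (false_positives + true_negatives)   # ~3.3%
false_negative_rate = false_negatives / (false_negatives + true_positives)   # 10.0%

print(f"overall error rate:  {overall_error:.1%}")
print(f"false positive rate: {false_positive_rate:.1%}")
print(f"false negative rate: {false_negative_rate:.1%}")
# Whether this system's "error rate" is sufficiently low depends entirely on
# which metric and which threshold the court adopts - precisely the gap that
# vague drafting leaves to litigation.
```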

The challenges US firms face in accessing and servicing the European market arise from a fragmented global tech landscape, limiting the collaborative potential that comes with shared technological advancements. All the more so since, in the AI scenario, Big Tech is investing more in AI startups than venture capitalists are[17]. As (institutional) investors in AI startups, Big Tech companies may become AI system providers and gain access to their users' business practices and strategies. This mechanism raises well-known abuse-of-dominance concerns in the EU competition panorama, but these issues are not explored in the AIA[18].


  3. Disclosures of proprietary information

Disclosures involve all types of AI models under the EU regulatory classification. Even prohibited AI systems that fall within one of the exceptions must deliver information regarding data quality for transparency purposes[19]. The emphasis on data quality reveals the proposal's inability to fully understand how this technology works. The common mistake of conflating data quality with reliability or trustworthiness implies that there is a distinction between "good" and "bad" data when, in reality, data are merely facts. The problem should instead shift towards how data are classified, the scope and purpose of that classification, and its interplay with the model. Against the "good data" argument, scholars have shown how algorithms – sets of finite instructions established to produce an output – are commonly misperceived as something magical, an oracle for the future[20]. Algorithms are humanly constructed models, not divination, even when they have predictive capacities. As an example of how poor the current understanding of AI models is, consider that generative models such as ChatGPT do not consult a database to answer questions – a fact that may surprise many, and one on which the debate around good or bad data founders[21]. Nevertheless, regulators could require firms to disclose whether their data are "good" or "bad" (on the assumption that a program working with data and claiming a specific result must have "good data") and the parameters used to make the models work.
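The point that generative models answer from learned parameters rather than by consulting a database can be shown with a deliberately tiny sketch: a word-level bigram sampler. Real large language models are vastly larger and more sophisticated, but the structural point is the same; the corpus and code here are invented for illustration.

```python
import random
from collections import defaultdict

# "Training" compresses the corpus into parameters (transition counts);
# generation then draws from those parameters alone. There is no database
# lookup at generation time: the "knowledge" lives in the parameters,
# which is the structural point made above about models such as ChatGPT
# (at an enormously larger scale).
corpus = "data are facts data are classified facts feed the model".split()

params = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    params[current_word].append(next_word)

def generate(start: str, length: int = 6) -> str:
    """Sample a sequence from the learned parameters (no corpus access)."""
    words = [start]
    for _ in range(length):
        candidates = params.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("data"))
# Asking whether the data here are "good" or "bad" is ill-posed: output
# quality depends on how the corpus was selected and classified, and on
# the parameters, not on the data being intrinsically good or bad.
```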

While transparency is essential, if the AIA requirements move toward disclosure of the "parameters" of a model, the Act may force companies to reveal proprietary information, potentially compromising their competitive edge. Such disclosures could dissuade US tech giants from fully embracing the EU market, or lead them to limit the scope of their operations for fear of revealing too much about their technological advancements, making it more challenging to develop complex AI systems.


  4. Constraints on innovation involving transatlantic cooperation

The extraterritorial nature of the AIA means that any company, regardless of its location, offering AI products or services within the EU or targeting EU citizens is bound by these regulations. Extraterritoriality has immediate implications for US tech giants with a significant European presence. Companies like Google, Amazon, and Microsoft must align their practices with EU standards, potentially triggering a ripple effect that influences global AI standards[22].

For some opponents of regulation, the AIA's cautious approach could result in missed opportunities for groundbreaking innovations, placing the EU at a disadvantage in the global tech race. Nevertheless, there is no demonstrated correlation between increased regulation and the EU's failure to reach the competitive levels of other geographical areas, such as the US and China. The concern that the EU would lose competitive market share in innovation carries limited weight, primarily because the EU market was unable to attract innovation even before regulatory intervention. There are compelling reasons why the EU has not succeeded, among them punitive bankruptcy laws, a risk-averse culture, and the inability to attract human capital into the tech ecosystem[23].

Finally, transatlantic cooperation between the EU and the US could counter China's leadership in the tech realm[24]. If Chinese tech giants continue expanding, their growth could erode the reach of European democratic values. Ideally, the EU's regulatory push could also spur positive developments within the United States. As both sides grapple with the ethical dimensions of AI, there is an opportunity for collaborative efforts to establish a harmonized global framework. A shared commitment to ethical AI principles can foster international cooperation, benefiting transatlantic relations and the global tech ecosystem.

In conclusion, while the European Union’s Artificial Intelligence Act reflects a commendable effort to ensure ethical AI development with a human-centered approach, its potential impact on the US market raises valid concerns. The risk-based approach, stringent regulations, and punitive measures outlined in the AIA may inadvertently hinder innovation, disrupt global collaboration, and create regulatory disparities between the EU and the US. Striking a balance between safeguarding against AI-related risks and fostering an environment conducive to technological advancement is crucial for ensuring continued growth and collaboration within the global tech industry.

The question then is not whether regulation stifles healthy competition and innovation but how regulation can foster a free flow of ideas and technological advances in the EU.

* The author is a Max Weber Fellow in Law at the European University Institute. This comment was drafted with the support of the South EU Data Governance Chair during the author's fellowship at Roma Tre University.

[1] AI is considered an umbrella term because it encompasses a growing number of technologies, spanning from Deep Learning to Speech Recognition, most of which support one another.

[2] P. Davies, ‘Potentially disastrous’ for innovation: Tech sector reacts to the EU AI Act saying it goes too far, in Euronews.next, 15 December 2023.

[3] A report drafted by a US organization, AI Now, and signed by over fifty institutional and individual experts warned about the issues of General Purpose AI (or Foundational Models). AI Now Institute, Policy Brief, General Purpose AI Poses Serious Risks, Should Not be Excluded From the EU’s AI Act, in AINOW, 13 April 2023.

[4] K. Werbach, Is the US Really Behind on AI Policy? Part I, in The Road to Accountable AI, 22 January 2024.

[5] This process started in 2018 with an expert group appointed by the EU Commission to draft a proposal for guidelines on AI ethics. EU Commission, Press Release N. IP-18-1381, Artificial Intelligence: Commission kicks off work on marrying cutting-edge technology and ethical standards, 9 March 2018. From that initial working group, several steps over the span of five years involving regulators, the industry, and stakeholders led to a political agreement on the Commission’s proposal by the co-legislators: the European Parliament (representing EU citizens) and the Council (representing governments of EU states), together in the “trilogue” negotiations. A. Gesser, M. Kelly, M. Hirst, S.J. Allaman, M. Muse, S. Thomson, The EU AI Act—Navigating the EU’s Legislative Labyrinth, in Debevoise & Plimpton, 29 November 2023.

[6] W. Henshall, E.U.’s AI Regulation Could Be Softened After Pushback From Biggest Members, in Time, 22 November 2023.

[7] European Commission, Press Release (Ref. IP/23/6473), Commission welcomes political agreement on Artificial Intelligence Act, 9 December 2023.

[8] European Parliament, Press Release (Ref. 20231206IPR15699), Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI, 9 December 2023.

[9] European Commission, European approach to artificial intelligence, in Shaping Europe’s digital future, 2023.

[10] A. Bradford, Digital Empires: The Global Battle to Regulate Technology, New York, 2023, 105.

[11] Proposal for a Regulation of the European Parliament and of the Council. Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM/2021/206 final).

[12] M. Almada – N. Petit, The EU AI Act: A Medley of Product Safety and Fundamental Rights?, in Robert Schuman Centre for Advanced Studies Research Paper, 59, 2023.

[13] L. Bertuzzi, EU’s AI Act negotiations hit the brakes over foundation models, in Euractiv, 10 November 2023.

[14] Ministry of Enterprises and Made in Italy, Italy, Germany and France agree on strengthening their cooperation on Artificial Intelligence, in Mimit.gov: News.

[15] The EU accounts for 4% of the capitalization of the 70 largest platforms, while the US accounts for 73% and China for 18%. The EU wants to set the rules for the world of technology, in The Economist, 20 February 2020.

[16] V. Zeno-Zencovich, Data protection[ism], in MediaLaws – Rivista di diritto dei media, 2, 2022, 11 ss.

[17] G. Hammond, Big Tech is spending more than VC firms on AI startups, in Ars Technica, 27 December 2023.

[18] T. Schrepel, Decoding the AI Act: A Critical Guide for Competition Experts, in Amsterdam Law & Technology Institute – Working Paper 3-2023 / Dynamic Competition Initiative – Working Paper 4-2023, 11.

[19] An exception to the prohibition concerns the use of biometric identification for law enforcement purposes in public spaces, subject to judicial authorization and, in principle, not in real time. When used in real time, these systems may be deployed only where compelling interests are at stake, such as criminal activities involving sexual exploitation, terrorism, rape, and other narrowly defined purposes. European Parliament, Press Release (Ref. 20231206IPR15699), Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI, 9 December 2023.

[20] E. Tucker, Artifice and Intelligence, in Medium: Georgetown Center on Privacy and Technology, 8 March 2022.

[21] L. Floridi, AI as Agency Without Intelligence: on ChatGPT, Large Language Models, and Other Generative Models, in Philosophy & Technology, 2023, 1 ss.

[22] A prime example of the European rights-driven regulatory model started with the GDPR, highlighting what is known as the Brussels effect, whereby the standards imposed by the European regulator had a direct effect on EU consumers. A. Bradford, The Brussels Effect: How the European Union Rules the World, New York, 2020. This effect extended indirectly to other consumers by raising the standards of digital platforms’ ‘Terms of Use’ in the United States to align with EU requirements, regardless of whether the target was a European consumer, and thus even when the firm and the target (consumer) were both within domestic boundaries. K.E. Davis – F. Marotta-Wurgler, Contracting for Personal Data, in New York University Law Review, 4, 2019, 662 ss.

[23] A. Bradford, The False Choice Between Digital Regulation and Innovation, in ssrn.com, 2024.

[24] The EU wants to become the world’s super-regulator in AI. And there’s precedent, in The Economist, 24 April 2021.
