Stormy weather: cybersecurity and the new AI Act


1. Introduction

As is well known, the Council of the European Union, the European Commission and the European Parliament are currently working hard to agree on a final text for the AI Act[1], which, once approved, will establish the EU's landmark regulation on artificial intelligence. Many contentious issues are still being debated, in an effort to strike the right balance between establishing rules that protect core European values and not hindering progress. However, the trilogue negotiations[2] among the EU institutions do not seem to touch upon the new cybersecurity obligations imposed by the draft AI Act. Hence, the rules on the cybersecurity of AI systems appear to be close to their final shape in both wording and objectives, and it is possible to start sharing a few preliminary thoughts and considerations on those forthcoming rules[3].

2. Cybersecurity and the AI Act

In that context, one might start by observing that the very spirit of the AI Act is inevitably tied to a discussion on cybersecurity.

In fact, the rationale underpinning the AI Act is primarily to protect fundamental rights[4], which in turn almost necessarily leads to an emphasis on the transparency, trustworthiness and fairness of AI systems[5].

However, those objectives inevitably rest, among other things, on solid cybersecurity (i.e., on the combination of activities necessary to protect network and information systems, the users of such systems, and other persons affected by cyber threats[6]). Indeed, fundamental rights cannot be safeguarded if AI systems are left vulnerable and exposed to cyber threats, with the possibility that the underlying data or software/models may be stolen, manipulated or diverted[7].

It is plain that data breaches affecting Big Data[8] (which is necessary to train, develop and instruct AI systems) endanger the individuals to whom the data refer. At the same time, one cannot ignore the risk that AI systems used to make decisions affecting people’s lives may operate on data that have been intentionally forged, or by means of software/algorithms that have been maliciously altered or influenced.

3. Are there too many cybersecurity regulations in the EU?

In light of all the above, it is no surprise that the AI Act addresses the cybersecurity issue and establishes requirements and duties of care for those who create or develop high-risk AI systems[9].

However, it is fair to question whether the cybersecurity rules of the AI Act will create a clear regulatory framework once combined with the cybersecurity rules already flowing from the many pieces of legislation existing at EU level, which include: (i) Regulation (EU) 2022/2554 (Digital Operational Resilience Act, DORA, which applies to players in the financial sector[10]); (ii) Directive (EU) 2022/2555 (on measures for a high common level of cybersecurity across the EU, the NIS 2 Directive, which applies to several categories of key service providers[11]); and (iii) Directive (EU) 2022/2557 (on the resilience of critical entities, the CER Directive, which applies to so-called critical entities[12]). Moreover, on top of those pieces of legislation (already enacted), the proposed Cyber Resilience Act for hardware and software products (CRA) is expected to be approved in 2024[13].

The cybersecurity requirements of the AI Act will apply in addition to those rules[14]. However, establishing many different rules in many different legal instruments is not always a good idea.

By way of example, a business undertaking can be subject to the cybersecurity provisions of the DORA (because it operates in the financial sector) and simultaneously to the cybersecurity requirements of the AI Act (because it develops high-risk AI systems). The same can happen to the provider of an AI system (triggering the application of the AI Act) which has developed a piece of software (triggering the application of the CRA), or to a critical entity that is also an essential service provider (to which both the CER and NIS 2 Directives apply).

The risk of an intricate legal framework is, therefore, real.

The EU legislators have tried to limit that risk through the use of presumptions and mutual recognition rules. Art. 8 of the CRA provides that high-risk AI systems compliant with the essential requirements of the CRA “are deemed” per se to be compliant with the cybersecurity requirements of the AI Act. In turn, the AI Act (Art. 42) provides that high-risk AI systems that have been certified, or for which a statement of conformity has been issued, under the CRA “shall be presumed” to be in compliance with the AI Act.

4. AI Act overlaps with GDPR

Those mechanisms do not entirely solve the issue.

AI systems can only be developed and trained by feeding them large amounts of data, which of course triggers the application of the General Data Protection Regulation (Regulation (EU) 2016/679). The point is that the GDPR has its own security rules[15]. Hence, the cybersecurity requirements in the AI Act overlap with the security measures mandated by the GDPR.

The coexistence of the two is not straightforward, and some discordant elements need to be mentioned.

To start with, the GDPR security obligations impose an adequacy standard, and adequacy needs to be benchmarked against three elements: the technological state of the art, the characteristics of the data processing and the costs of implementing the security measures. As we will see below, the cost of implementation is not mentioned in the AI Act when it establishes the cybersecurity standards for high-risk AI players. One can wonder about the significance of that omission. In principle, two different interpretations are viable. According to a first view, the EU regulators of AI intend to impose cybersecurity protections for high-risk AI systems regardless of their cost. According to a second view, the cost is, by definition, one of the elements of any adequacy judgement, whether mentioned or not, and should always be taken into account.

The second discordant element lies in the fact that the GDPR distinguishes between data controllers and data processors, whilst the AI Act distinguishes between AI providers and AI users. However, it is not certain that AI providers will always be data controllers. That raises some legal doubts. For example, DPIA obligations rest only on the data controller. If an AI provider were not a data controller (but a data processor), we would have to reach the odd conclusion that such an AI provider has no DPIA obligations under the GDPR, even though it is the one developing the AI system and therefore bears cybersecurity obligations (by design and by default) under the AI Act.

5. The AI Act rules on cybersecurity

Having said the above, we can now identify the rules that the AI Act dedicates to cybersecurity. They are addressed to providers of high-risk AI systems and can be found in three Recitals (43[16], 49[17] and 51[18]), two Articles (13[19] and 15[20]) and one Annex (Annex IV[21]).

The overall picture is that high-risk AI systems must meet cybersecurity requirements and be resilient against errors, faults or inconsistencies.

Those requirements must be applied (and measured) both during the development phase of the AI system (when it must be tested and validated) and then throughout its lifecycle (cybersecurity by design and by default). Cybersecurity requirements therefore apply ex ante (prevention), in the course of operation (control and resilience) and ex post (remediation and restart).

Cybersecurity is intended broadly: not only as the set of security measures against external cyber attacks, but also as resilience against attempts to alter the use, behaviour or performance of the AI system or to compromise its security. In that sense, cybersecurity seems to call for a holistic approach, with simultaneous protection of the datasets, the software/model, the ICT infrastructure and the whole environment in which the AI operates[22].

Those very stringent obligations are coupled with an adequacy test. That test has to be assessed against several different facets: the technological state of the art, the “intended purpose” of the AI system, what the AI user would “reasonably expect” from the AI system and, in general, the overall circumstances and inherent risks. On top of all that, the AI provider has a duty to know and foresee relevant circumstances that may have an impact, and to disclose them to users.

6. Questionable points

The remarkably burdensome duties of care established by these provisions will give rise to a number of legal complexities when it comes to their application.

In that regard, we start by noting that Articles 13 and 15 state that high-risk AI systems must comply with a ‘robustness’ requirement and a ‘cybersecurity’ requirement. It is debatable what the difference between the two terms is and whether the dual wording indicates the existence of two separate requirements. Possibly, the two terms do imply different, albeit intertwined, characteristics. Robustness could be understood as technical solidity resulting from the validation and testing of the mathematical, statistical and scientific bases on which the AI system rests (an internal element). Cybersecurity, by contrast, might point to the ability of the AI system to withstand external attacks (an external element). Authorities, authors and practitioners have room to elaborate on that.

Another complexity for the interpreter lies in the fact that the adequacy, robustness and cybersecurity tests must be satisfied taking into account the “intended purpose” of the AI system. The complexity of that expression is threefold. First, it is uncertain whether the term refers to the purpose of the AI developer, that of the entity procuring the AI system from an external supplier, or that of a reasonable user[23]. Second, it remains uncertain whether an objective test should apply, based on the reasonable expectations/reliance of standard business players in the market (rather than on the subjective goals of a specific party). Finally, one could reasonably wonder whether any distinction between consumers and professionals should matter.

Moving to very practical questions, some uncertainty seems to surround the obligation to measure the cybersecurity performance of AI systems. Which metrics should be used to measure it? And who should carry out the audit?

7. Civil and corporate law consequences

Besides the fines and sanctions provided for in the AI Act, infringements of the cybersecurity rules will be considered breaches of the duty of care imposed on high-risk AI providers and will give rise to their civil liability.

Furthermore, a number of civil law consequences will be triggered by the AI Act.

(1) Failure to comply with the AI Act standards (adequacy and robustness, foreseeability of consequences, transparency, etc.) may lead to the termination or invalidation of contracts, or to pre-contractual liability (based on doctrines of unfair negotiation or fraud and deception).

(2) Boards of directors of companies using AI systems will need to put in place adequate organisational structures, interactions and information flows among corporate bodies and committees, as well as periodic audits and controls on AI (also to satisfy ESG reporting obligations). In addition, directors' decisions concerning AI systems will need to be adequately reasoned to satisfy the business judgement rule.

(3) Compliance with the AI Act requirements will influence the valuation of companies and businesses, and that will be reflected in representations and warranties within sale and purchase agreements.

(4) Insurance coverage will likely be sought, but limitations might apply depending on the differing insurance laws of the EU Member States.

8. How do we sort out the ambiguity?

The overlaps between the AI Act and other cybersecurity legislation, the potential duplication of fines, and the civil and corporate consequences create quite an alarming picture for those who are, or want to be, involved in this frontier technology. All the more so when the language of the new rules remains open and might give rise to conflicting interpretations.

A significant effort needs to be made to streamline the application of the new rules and prevent them from becoming a roadblock to development.

It is our view that the only way to proceed is to define precise technical standards (under the CRA and the AI Act) and indicate that compliance with those technical standards satisfies the legal test. That is precisely what ENISA is trying to achieve, although a number of difficulties hinder and delay the exercise[24].

 

 

[1] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts

[2] https://eur-lex.europa.eu/EN/legal-content/glossary/trilogue.html

[3] See https://publications.jrc.ec.europa.eu/repository/handle/JRC134461, EU Commission, JRC Science for Policy Report, Cybersecurity of Artificial Intelligence in the AI Act, 2023; See also https://www.enisa.europa.eu/events/2023-enisa-ai-cybersecurity-conference/keynote-enisa-final-june-2023.pdf

[4] https://www.europarl.europa.eu/charter/pdf/text_en.pdf (Charter of Fundamental Rights of the European Union)

[5] Those objectives lay the foundations for differentiated rules and requirements, designed to match the different risk profiles deriving from the distinct characteristics, aims and use cases of AI systems.

[6] Art. 2.1 of the EU Cybersecurity Act, Regulation (EU) 2019/881 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification (https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A32019R0881)

[7] See Flavia Bavetta, https://www.civiltadellemacchine.it/documents/14761743/0/Bavetta+-+La+centralita%CC%80+dei+requisiti+normativi+di+cybersecurity.pdf?t=1653922096330

[8] Big Data is usually defined as huge amounts of data processed with great ‘velocity’, drawn from large-‘volume’ databases containing a ‘variety’ of different data, whose ‘veracity’ is checked and which ultimately create ‘value’.

[9] One of the most discussed points of the new AI Act is the definition and scope of “high-risk” AI systems. However, the general approach, which identifies four different categories of AI systems (prohibited, high risk, limited risk, minimal risk), appears to be shared (https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai).

[10] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020PC0595, see Art. 2. DORA applies from 17 January 2025.

[11] https://eur-lex.europa.eu/eli/dir/2022/2555. See Elena Kaiser, https://www.medialaws.eu/rivista/the-new-nis-ii-directive-and-its-impact-on-small-and-medium-enterprises-smes-initial-considerations/. The Member States have to adopt and publish the measures necessary to comply with the NIS 2 Directive by 17 October 2024.

[12] Critical entities provide essential services in upholding key societal functions, supporting the economy, ensuring public health and safety, and preserving the environment; see https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020PC0829

[13] https://digital-strategy.ec.europa.eu/en/library/cyber-resilience-act.

[14] The AI Act has a horizontal approach (it regulates the AI systems in whichever sector they are deployed), which implies that it will overlap with the DORA (when AI systems are used in the financial sector), the NIS 2 Directive (when AI systems are used by key service providers), the CER Directive (when the AI systems are used by critical entities) and the CRA (when AI systems are used by software/hardware manufacturers).

[15] Inter alia, the obligations under Articles 32 (security of processing), 33/34 (obligations in case of data breaches) and 35/36 (data protection impact assessments).

[16] Recital 43: Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity.

[17] Recital 49: High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated to the users.

[18] Recital 51: Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.

[19] Art. 13.2/3: High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users [by specifying]: (b)(ii) the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity.

[20] Art. 15.1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle. […]

15.3. High-risk AI systems shall be resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. […]

15.4. High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use or performance by exploiting the system vulnerabilities.

The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.

The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent and control for attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), or model flaws.

[21] Annex IV: The technical documentation referred to in Article 11(1) shall contain at least the following information, as applicable to the relevant AI system: […] 2. A detailed description of the elements of the AI system and of the process for its development, including: […] the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure accuracy, robustness, cybersecurity and compliance with other relevant requirements set out in Title III, Chapter 2 as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f).

[22] In fact, cybersecurity measures should prevent (i) the poisoning of the data sets or the AI model, (ii) data breaches, (iii) adversarial attacks and/or (iv) manipulation of the AI system.

[23] The Proposal for Standard Contractual Clauses for the procurement of Artificial Intelligence (AI) by public organisations (Version September 2023 (draft), High Risk version; https://public-buyers-community.ec.europa.eu/communities/procurement-ai/resources/proposal-standard-contractual-clauses-procurement-artificial) suggests that the intended purpose consists of the goals that the public organisation procuring an AI system from an external supplier intended to achieve.

[24] To mention some: (1) the notion of AI can include both technical and organisational elements, and is not limited to software; (2) the application of best practices for software quality assurance might be hindered by opacity; (3) determining appropriate security measures relies on system-specific analysis and sector-specific standards, and standards depend on technological development (a technological gap: continuous learning and the difficulty of checking data quality in real time or performing continuous validation), so existing standards can only partially mitigate the cybersecurity risk; (4) there are certain standardisation gaps: traceability is an issue when it comes to Big Data; (5) inherent features of machine learning are not reflected in existing standards, especially with respect to metrics and testing.

See ENISA, Cybersecurity of AI and Standardisation, March 2023.
