Council Publishes Proposed Amendments to Draft AI Regulation

On 21 April 2021, the European Commission published its Proposal for a Regulation on Artificial Intelligence (the “AI Regulation”) and invited public feedback on the text of the proposal. The public consultation received over 300 responses, including from industry stakeholders, NGOs and academics, indicating significant interest in the proposed AI Regulation. In a further significant development, on 29 November 2021, the Presidency of the Council of the European Union (the “Council Presidency”) published a partial compromise text of the AI Regulation containing a number of notable amendments, which are set out below.

Overview of the AI Regulation and Proposed Amendments

An AI system is defined in the Commission’s proposal as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. This definition appears to have been drafted very broadly with the intention of future-proofing it. In the Council Presidency’s compromise text, the definition of “AI system” has been rewritten to include an explicit reference indicating that any such system should be capable of determining how to achieve a given set of human-defined objectives by learning, reasoning or modelling. The Council Presidency states that this is intended to ensure greater legal clarity and to prevent more traditional software systems, which are not normally considered to be artificial intelligence, from falling within the scope of the proposed Regulation.

The techniques listed in Annex I, which are referred to in both the original and amended definition of AI system, have not been amended in the Council Presidency’s compromise text. The techniques are:

  • Machine learning (including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning).
  • Logic and knowledge-based approaches (including knowledge representation, inductive (logic) programming, knowledge bases, inference/deductive engines, (symbolic) reasoning and expert systems).
  • Statistical approaches, Bayesian estimation, search and optimisation methods.

Scope

In terms of application, the Council Presidency has proposed amendments to the existing scope of the AI Regulation so that it will apply to:

  1. Providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are physically present or established within the Union or in a third country;
  2. Users of AI systems who are physically present or established within the Union; and
  3. Providers and users of AI systems who are physically present or established in a third country, where the output produced by the system is used in the Union.

In addition, the Council Presidency proposes extending that list to include the following groups:

  1. Importers and distributors of AI systems;
  2. Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark; and
  3. Authorised representatives of providers, which are established in the Union.

As can be seen from the above, it is proposed that the AI Regulation will have extra-territorial effect, with its impact being felt beyond the EU.

There are some noteworthy exceptions to the scope of the proposed AI Regulation. The AI Regulation will not apply to AI systems developed or used exclusively for military purposes. The AI Regulation also will not apply to public authorities in a third country or international organisations falling within the scope of the AI Regulation where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the EU or with one or more Member States. The Council Presidency proposes adding to the list of exceptions so that the AI Regulation:

  • Shall not apply to AI systems, including their output, specifically developed and put into service for the sole purpose of scientific research and development; and
  • Shall not affect any research and development activity regarding AI systems in so far as such activity does not lead to or entail placing an AI system on the market or putting it into service.

Risk-Based Approach

The AI Regulation proposes a risk-based approach to AI systems, with risk assigned according to the proposed use of the system. The three levels of risk identified in the explanatory memorandum accompanying the proposed Regulation are:

  1. Unacceptable risk;
  2. High risk; and
  3. Low or minimal risk.

Title II of the AI Regulation sets out a list of prohibited practices, comprising AI systems whose use is deemed unacceptable by virtue of violating fundamental rights. This includes practices that have a significant potential to manipulate vulnerable groups and persons, such as children or those with disabilities, through subliminal techniques. The Council Presidency’s compromise text proposes a number of amendments in this regard. The compromise text extends the prohibition on using AI systems for social scoring to cover private actors as well as public authorities. With regard to the use of real-time biometric identification systems in publicly accessible spaces by law enforcement, the compromise text broadens the list of objectives for which law enforcement should be allowed to use such systems. Further, the compromise text modifies the prohibition on the use of AI systems that exploit the vulnerabilities of specific groups of persons to include persons who are vulnerable due to their social or economic situation.

The proposed AI Regulation focuses mainly on high-risk AI systems, which will not be prohibited outright but will be subject to strict compliance obligations, as well as technical and monitoring obligations. The Council Presidency states that Article 6 of the Regulation, which concerns classification rules for high-risk AI systems, has been rewritten in the compromise text “to clarify the logic behind the classification for high-risk AI systems, and its interconnection with Annexes II and III.” Annex III of the AI Regulation, which lists certain high-risk AI systems, has also been updated to include, amongst other things, AI systems intended to be used to control, or as safety components of, digital infrastructure and AI systems intended to be used to control emissions and pollution.

Enforcement

In terms of enforcement, the AI Regulation will require Member States to lay down rules on penalties, including administrative fines, applicable to infringements of the AI Regulation, which must be “effective, proportionate and dissuasive”. In this respect, it appears to take its lead from the General Data Protection Regulation. The AI Regulation will set out thresholds for administrative fines, with the most severe breaches being subject to fines of up to €30 million or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher. As noted below, enforcement and compliance are areas which the Council Presidency has highlighted for future consideration.

Future Proofing

The explanatory memorandum published with the text of the proposed AI Regulation notes that the Commission is alive to the importance of ensuring that technological developments do not render the AI Regulation obsolete. This future-proofing is to be achieved by way of delegated acts, which will allow the Commission to amend the Annexes to the AI Regulation to take account of advancements in AI technology. Notably, in the compromise text, the Council Presidency has included a new reporting obligation for the Commission, whereby it will be obliged to assess the need for amendment of the lists in Annexes I and III every 24 months following the entry into force of the Regulation and to present the findings of that assessment to the European Parliament and the Council.

Next Steps

In order to become law, the AI Regulation must go through the EU’s ordinary legislative procedure, which requires consideration and approval of the proposed Regulation by the Council and the European Parliament (the “Parliament”).

An early indication of the Parliament’s attitude towards AI was seen in its adoption on 6 October 2021 of a non-binding resolution concerning the use of artificial intelligence by the police and judicial authorities in criminal matters. In that resolution, the Parliament called for, amongst other things, a ban on the use of facial recognition technology for law enforcement purposes that leads to mass surveillance in publicly accessible spaces.

The publication of the Council Presidency’s compromise text is a significant step in the Council’s consideration of the AI Regulation. However, the Council Presidency has noted that it is only a first, partial consideration of the Regulation. In a progress report published on 22 November 2021, the Council Presidency also identified “additional, potentially more complex issues that will require further analysis during subsequent discussions”, such as requirements for high-risk AI systems provided for in Chapter 2 of Title III of the Regulation. The Council Presidency notes that many delegations have indicated that those requirements are sometimes slightly vague and should be better defined. Other areas identified for future consideration include the responsibilities of various actors in the AI value chain, and compliance and enforcement.

Once adopted, the AI Regulation will come into force twenty days after it is published in the Official Journal. However, the Commission’s proposed text provides for a further period of 24 months after entry into force before the Regulation will apply.

This document has been prepared by McCann FitzGerald LLP for general guidance only and should not be regarded as a substitute for professional advice. Such advice should always be taken before acting on any of the matters discussed.