AI Regulation Update and Next Steps

On 21 April 2021, the European Commission published its Proposal for a Regulation on Artificial Intelligence (the “AI Regulation”) and invited public feedback on the text of the proposal. The public consultation period ended on 6 August 2021, with over 300 responses received from industry stakeholders, NGOs, academics and others. The high level of engagement with the consultation indicates significant interest in the proposed AI Regulation. The European Commission will now consider the feedback received, and other EU bodies will have their say.

Overview of the AI Regulation

While the definition of “artificial intelligence system” is likely to attract considerable scrutiny, and consequently amendments, it is drafted very broadly, with the intention both of future-proofing it and of capturing not only AI systems offered as stand-alone products but also products and services that rely on AI directly or indirectly. An AI system is defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. The techniques listed in Annex I include the following (see the short illustrative sketch after the list):

  • Machine learning (including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning).
  • Logic and knowledge-based approaches (including knowledge representation, inductive (logic) programming, knowledge bases, inference/deductive engines, (symbolic) reasoning and expert systems).
  • Statistical approaches, Bayesian estimation, search and optimisation methods.
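
By way of illustration only, and not as a legal analysis, the minimal sketch below (in Python, using invented figures) shows how even a very simple program built on one of the Annex I techniques, here a basic statistical approach, generates predictions for a human-defined objective and could therefore arguably fall within the proposed definition of an “AI system”.

  # Illustrative only: a trivial "statistical approach" (ordinary least squares)
  # that generates predictions for a human-defined objective and so could
  # arguably fall within the proposed definition of an "AI system".
  import numpy as np

  # Invented historical data: advertising spend (EUR) and monthly sales (EUR).
  spend = np.array([1000.0, 2000.0, 3000.0, 4000.0])
  sales = np.array([11000.0, 19500.0, 31000.0, 40500.0])

  # Fit a straight line by least squares -- a textbook statistical technique.
  slope, intercept = np.polyfit(spend, sales, deg=1)

  # The "output" influencing a decision: a sales prediction for a proposed budget.
  predicted = slope * 5000.0 + intercept
  print(f"Predicted sales for a EUR 5,000 budget: EUR {predicted:,.0f}")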

In terms of territorial application, the AI Regulation will apply to:

  1. Providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;
  2. Users of AI systems located within the Union; and
  3. Providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union.

As can be seen from the above, it is proposed that the AI Regulation will have extra-territorial effect, with obligations capable of applying to providers and users established outside the EU.

There are some noteworthy exceptions to the scope of the proposed Regulation. The AI Regulation will not apply to AI systems developed or used exclusively for military purposes. Nor will it apply to public authorities in a third country, or to international organisations falling within the scope of the Regulation, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the EU or with one or more Member States.

The AI Regulation proposes a risk-based approach to AI systems, and assigns risk based on the proposed use of the system. The three levels of risk identified in the explanatory memorandum that accompanies the proposed Regulation are:

  1. Unacceptable risk;
  2. High risk; and
  3. Low or minimal risk.

Title II of the Regulation sets out a list of prohibited practices, comprising AI systems whose use is deemed unacceptable because it violates fundamental rights. These include practices that have a significant potential to manipulate vulnerable groups and persons, such as children or those with disabilities, through subliminal techniques.

The proposed Regulation focuses mainly on high-risk AI systems, which will not be prohibited outright but will be subject to strict compliance, technical and monitoring obligations.

In terms of enforcement, the AI Regulation will require Member States to lay down the rules on penalties, including administrative fines, applicable to infringements of the Regulation, which must be “effective, proportionate, and dissuasive”. In this respect, the proposal takes its lead from the General Data Protection Regulation. The AI Regulation will set out thresholds for administrative fines, with the most severe breaches being subject to fines of up to €30 million or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
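
For illustration, the short sketch below (in Python, using a hypothetical turnover figure) sets out how the proposed cap on fines for the most severe breaches would be determined: €30 million or, for a company, 6% of total worldwide annual turnover for the preceding financial year, whichever is higher.

  # Sketch of the proposed cap on administrative fines for the most severe
  # breaches: EUR 30 million or 6% of total worldwide annual turnover for the
  # preceding financial year, whichever is higher. Turnover figure is invented.
  FIXED_CAP_EUR = 30_000_000
  TURNOVER_RATE = 0.06  # 6% of total worldwide annual turnover

  def maximum_fine(worldwide_annual_turnover_eur: float) -> float:
      """Return the proposed maximum fine for the most severe breaches."""
      return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

  # Example: a company with EUR 1 billion turnover faces a cap of EUR 60 million,
  # because 6% of turnover exceeds EUR 30 million.
  print(f"Cap: EUR {maximum_fine(1_000_000_000):,.0f}")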

Next Steps

In order to become law, the AI Regulation must go through the EU’s ordinary legislative procedure, which requires consideration and approval of the proposed Regulation by the Council of the European Union (the “Council”) and the European Parliament (the “Parliament”).

An early indication of the Parliament’s attitude towards AI was seen in its adoption on 6 October 2021 of a non-binding resolution concerning the use of artificial intelligence by the police and judicial authorities in criminal matters. In that resolution, the Parliament called for, amongst other things, a ban on the use of facial recognition technology for law enforcement purposes that leads to mass surveillance in publicly accessible spaces.

On 18 June 2021, the Council published a progress report on its consideration of the AI Regulation. That report notes that the Presidency of the Council has identified general support for the overall objectives of the proposal, and for its approach in relation to those parts of the Regulation that had been debated at that stage. However, the progress report also notes that discussions are still at a very early stage and that most delegations were keen to emphasise that, as the proposal is “highly technical, transversal and complex”, developing national positions will take some time. With that caveat in mind, the progress report does note that some early queries have been raised, including in relation to the definition of “artificial intelligence system” (see above), which it is felt may be too broad and, in conjunction with the list of approaches and techniques in Annex I, could potentially capture more traditional software systems that should not fall within the AI Regulation.

Future Proofing

It is noteworthy that the current text of the AI Regulation proposes a period of 24 months after its entry into force before the Regulation will apply. However, the explanatory memorandum published with the text of the proposed Regulation notes that the Commission is alive to the importance of ensuring that technological developments do not render the Regulation obsolete. This future-proofing is to be achieved by way of delegated acts, which will allow the Commission to amend the Annexes to the Regulation to take account of advancements in AI technology.

This document has been prepared by McCann FitzGerald LLP for general guidance only and should not be regarded as a substitute for professional advice. Such advice should always be taken before acting on any of the matters discussed.