The EU Artificial Intelligence Act, and what it may mean for competition law

The European Parliament and the Council of the European Union are currently discussing the European Commission’s proposed Artificial Intelligence Act.  The objective of the act is to promote the uptake of artificial intelligence, while at the same time also addressing the risks associated with certain uses of the technology. 

Whilst the proposed new law is intended to apply “without prejudice to the application of Union competition law”, it is likely to have a significant impact on the procedural and investigative powers of competition agencies. 

This briefing provides an introduction to the proposed law and considers the implications of its implementation for competition law in Ireland. 

Introduction to the AI Act

As part of the European Union’s Digital Strategy aimed at enabling Europe’s digital transformation, and further to President von der Leyen’s political commitment to put forward legislation for a coordinated European approach on the human and ethical implications of artificial intelligence (AI), the Artificial Intelligence Act (the AI Act) will be the world’s first comprehensive AI law.  If adopted, it will have direct effect in Ireland and the regulation will automatically become part of Irish law.

The AI Act aims to ensure the proper functioning of the single market by creating the conditions for the development and use of trustworthy AI systems in the Union.  It proposes to achieve this by:

  1. Ensuring that AI systems placed on the EU market are safe and respect existing EU law;
  2. Ensuring legal certainty to facilitate investment and innovation in AI;
  3. Enhancing governance and effective enforcement of EU law on fundamental rights and safety requirements applicable to AI systems; and
  4. Facilitating the development of a single market for lawful, safe and trustworthy AI applications and preventing market fragmentation.

The European Commission draft proposes providing supervisory agencies with procedural powers that will have implications for national competition authorities across Europe, by indirectly extending the investigative powers of those authorities.

In order to enable designated national supervisory authorities to fulfil their obligations, the European Commission draft of the AI Act provides that they will have, inter alia:

full access to the training, validation and testing datasets used by the provider including through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access. 

In addition, the European Commission proposes, in its draft of the AI Act, to provide surveillance authorities with the power, upon reasoned request, to obtain access to the source code of the AI system (where it is deemed necessary).  The European Parliament has sought to limit these provisions, by replacing the power to request access to “source code” with a power to request access to “training and trained models of the AI system, including its relevant source parameters”, and requiring that all other reasonable ways to verify conformity be exhausted before such access is requested. 

Article 63 of the European Commission draft AI Act requires that the national supervisory authorities “without delay” report to both the Commission and national competition authorities, “any information identified in the course of market surveillance activities that may be of interest for the application of Union law on competition rules.”  Access to this type of data will be significant for competition authorities. 

Implications for the enforcement of competition law

Competition agencies are usually only able to compel information from companies, by way of a formal information request, when they suspect that an infringement of competition law has occurred.  A similar approach is adopted in the Digital Markets Act (DMA), which gives the Commission power to carry out inspections and request information where it suspects an infringement of the DMA.  However, under the European Commission draft of the AI Act, the bar for obliging the national supervisory authorities to share the information they gather with competition agencies appears to be much lower, and applies regardless of whether any infringement is suspected or alleged. 

The draft AI Act provides the national supervisory authorities with access to data for the purpose of carrying out the required conformity checks.  It requires them to supply this data to the competition agencies whenever it “may be of interest” to them.  There is no requirement that there be any suspicion of anti-competitive conduct.  In a world where companies are increasingly utilising algorithms to make competitively strategic decisions, it is not difficult to imagine that the national supervisory authorities will hold a great deal of information of interest to their competition law enforcement colleagues.

Implications for business

As currently drafted, the obligations of the AI Act will apply on a sliding scale based on the potential risks posed by the intended use of the particular AI system:

  • Unacceptable level of risk – AI systems that are considered to pose this level of risk will be strictly prohibited from being developed, deployed or from having an effect in the EU.  Examples include systems using harmful manipulative “subliminal techniques”, systems used by public authorities for “social scoring” purposes, and “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement (subject to certain exceptions).
  • High risk – AI systems at this level will be subject to extensive regulation, including conformity assessments, registration and post-assessment monitoring.  Compliance is likely to require providers to significantly alter their engineering processes.  Examples of high-risk AI systems include those utilised for the management of critical infrastructure, education, employment and worker management, access to essential services, law enforcement, immigration control, and the administration of justice and democratic processes.
  • Limited risk – AI systems which are regarded as posing a limited risk will be subject to certain transparency obligations.
  • Low and minimal risk – AI systems at this level will not be subject to any additional obligations under the AI Act. 

Companies with high-risk AI systems that require conformity assessments must be prepared to make the required data available, knowing that it may be provided to their national competition agency and/or the European Commission.  Although it is not yet clear when the AI Act will come into force, given the structural interlinking of competition concerns in the draft legislation, such companies should take steps before then to understand whether, and to what degree, this could expose them to the risk of an investigation by the relevant competition authority and, where necessary, take pre-emptive steps to minimise that risk.

For more information about the proposed AI Act, or to discuss what it may mean for your organisation, please contact Laura Treacy, Partner, Antitrust & Competition, Doug McMahon, Partner, Technology, Data Protection, or your usual contact in the firm. 

Also contributed to by Beverley Williamson

This document has been prepared by McCann FitzGerald LLP for general guidance only and should not be regarded as a substitute for professional advice. Such advice should always be taken before acting on any of the matters discussed.