Performance Management Algorithms: Putting Artificial Intelligence to Work

The Italian data protection authority has imposed a €2.6 million GDPR fine on Foodinho, an app-based food delivery company, for privacy violations relating to its use of a performance management algorithm. The decision is the first of its kind in the sphere of the algorithmic management of gig workers and has implications for wider legislative and policy developments relating to AI and to the use of AI systems for performance management.

On 5 July 2021, the Garante, the Italian supervisory authority under the GDPR, imposed a fine of €2.6 million on Foodinho S.r.l. for its use of performance management algorithms in connection with its riders. The authority held Foodinho to be in breach of the GDPR principles of transparency, security and data protection by design and by default, and found that it had failed to implement suitable measures to safeguard its riders’ rights and freedoms against discriminatory automated decision-making. According to the Garante, Foodinho’s algorithmic management practices violated Article 22(3) of the GDPR.

In its decision, the Garante explained that Foodinho was carrying out two types of automated processing. The first was an internal scoring system, known as the “excellency system”, which used a mathematical formula to rate riders on the basis of feedback from customers and business partners and of their delivery rates. The second formed part of the system that assigned orders or jobs to riders using an internal algorithm.
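To make the mechanism concrete, a scoring system of this kind can be reduced to a weighted formula over a handful of inputs. The sketch below is purely illustrative: the Garante’s decision does not disclose Foodinho’s actual formula, and every input, weight and range shown here is an assumption.

```python
# Illustrative sketch of a feedback-based rider score.
# The real "excellency system" formula is not public; all inputs,
# weights and ranges below are assumptions made for illustration only.
from dataclasses import dataclass


@dataclass
class RiderMetrics:
    customer_feedback: float  # assumed average customer rating, 0-5
    partner_feedback: float   # assumed average business-partner rating, 0-5
    delivery_rate: float      # assumed share of accepted orders delivered, 0-1


def excellence_score(m: RiderMetrics) -> float:
    """Combine normalised inputs with (invented) weights into a 0-1 score."""
    return round(
        0.4 * (m.customer_feedback / 5)
        + 0.3 * (m.partner_feedback / 5)
        + 0.3 * m.delivery_rate,
        3,
    )


# Example: a rider with strong delivery figures but weaker customer feedback.
print(excellence_score(RiderMetrics(3.2, 4.5, 0.97)))  # 0.817
```

The GDPR concern is less the arithmetic itself than what it feeds: where a score of this kind informs decisions about riders without meaningful human involvement, the transparency and accuracy principles and Article 22 are engaged.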

The Garante ordered Foodinho to change the way it uses algorithmic management within 60 days of being notified of the decision, and to brief the Garante on the measures taken pursuant to those orders within 90 days of notification. In particular, Foodinho must comply with the GDPR and put in place systems to verify the accuracy of the data produced by its algorithms and to prevent the improper or discriminatory use of reputational mechanisms based on feedback. Fundamentally, Foodinho is required to protect the rights and freedoms of its employee riders against decisions taken solely by automated means.
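By way of a hypothetical illustration of the kind of monitoring these orders contemplate, an operator might periodically compare how a feedback-based score is distributed across groups of riders and flag marked disparities for human review. Neither the GDPR nor the decision prescribes a particular statistical test; the grouping, metric and 0.8 threshold below are invented for illustration.

```python
# Hypothetical check for skewed outcomes from a feedback-based score.
# Groupings, data and the 0.8 threshold are invented; this is one possible
# monitoring approach, not a prescribed or official test.
from statistics import mean


def group_disparity(scores_by_group: dict[str, list[float]]) -> dict[str, float]:
    """Return each group's mean score as a ratio of the best-scoring group."""
    means = {group: mean(scores) for group, scores in scores_by_group.items()}
    best = max(means.values())
    return {group: round(m / best, 2) for group, m in means.items()}


def flag_disparities(scores_by_group: dict[str, list[float]],
                     threshold: float = 0.8) -> list[str]:
    """Flag groups whose mean score falls below `threshold` of the best group."""
    return [g for g, ratio in group_disparity(scores_by_group).items()
            if ratio < threshold]


# Example with invented data: riders split by contract type.
scores = {
    "full_time":  [0.82, 0.91, 0.88, 0.79],
    "occasional": [0.55, 0.61, 0.58, 0.66],
}
print(group_disparity(scores))   # {'full_time': 1.0, 'occasional': 0.71}
print(flag_disparities(scores))  # ['occasional']
```

Any group flagged in this way would then call for human investigation of the underlying feedback data, consistent with the requirement for meaningful human intervention in decisions otherwise taken solely by automated means.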

AI and Performance Management

The Garante’s decision highlights the challenges of operating AI systems that manage gig workers and comes amid broader moves internationally to examine and legislate for artificial intelligence and its anticipated risks. The French data protection authority (“CNIL”), together with the Défenseur des Droits, published a paper in May 2020¹ on algorithmic discrimination, examining how algorithms can lead to discriminatory outcomes and recommending how to identify and minimise algorithmic biases. Because many AI systems are self-learning, the underlying issue is the limited control over how data is processed and over the resulting algorithmic outputs, which can make it difficult to question a decision an AI system has reached on the basis of variable data patterns and automated processes.

How to legislate for AI?

As part of these wider international moves to create policy around and legislate for artificial intelligence, the Irish Government’s new National Artificial Intelligence Strategy, AI: Here for Good, A National Artificial Intelligence Strategy for Ireland², seeks to create opportunities to exploit AI in a positive way and to retain Ireland’s global competitiveness and future productivity, while emphasising the need for a people-centred, ethical approach to creating an AI-friendly environment. With digital sovereignty one of the two key pillars of the European Commission’s strategy under President von der Leyen, Member States are seeking to ensure that they are at the cutting edge of developments in AI.

AI in Ireland is not currently subject to any specific legislation, save that the GDPR provides for specific obligations regarding automated decision-making in relation to personal data. In February 2020 the European Commission published a White Paper on AI as part of its European Data Strategy and, in April 2021, the Commission published its proposal for an EU Regulation, the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally. The proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. It forms part of a wider AI package, which also includes the updated Coordinated Plan on AI, and which aims to guarantee the safety and fundamental rights of people and businesses while strengthening AI uptake, investment and innovation across the EU. Similarly, the current draft of the Digital Services Act would require service providers to give information about any use of automatic means for the purposes of content moderation, including a specification of the precise purposes, indicators of the accuracy of the filters used and the safeguards applied.

While EU legislators are moving to increase regulation of AI, the UK is moving ahead on its own: a report by the Taskforce on Innovation, Growth and Regulatory Reform³, published in June 2021, makes recommendations to the Prime Minister on how the UK can reshape its approach to regulation post-Brexit. It proposes replacing the UK General Data Protection Regulation (the “UK GDPR”) with a new Framework of Citizen Data Rights. The report makes specific reference to the reform of AI regulation and recommends removing Article 22 of the GDPR, which sets out a general prohibition on (and the exceptions to) fully automated decision-making.

AI and Performance Management: Where to next?

In the context of much wider legislative and policy developments in the area of AI, the Garante’s decision in relation to Foodinho sets a marker as to the challenges and risks involved in using AI for performance management. As the technologies develop and new challenges arise, companies must ensure they have mechanisms in place to protect employees and customers in line with the GDPR, as well as with incoming European legislation and national guidelines. The decision is a clear signal that companies using AI technology to manage employee performance must implement robust strategies to offset the risks arising from AI systems and automated decision-making.

Also contributed by Emily Cunningham.


  1. Algorithmes: prévenir l’automatisation des discriminations (Défenseur des Droits & Commission nationale de l'informatique et des libertés, 29 May 2020).
  2. AI: Here for Good, A National Artificial Intelligence Strategy for Ireland (Department of Enterprise, Trade and Employment, July 2021).
  3. Rt Hon Sir Iain Duncan Smith & Ors, Taskforce on Innovation, Growth and Regulatory Reform independent report (16 June 2021).

This document has been prepared by McCann FitzGerald LLP for general guidance only and should not be regarded as a substitute for professional advice. Such advice should always be taken before acting on any of the matters discussed.