Clifford Chance

Talking Tech

The Italian courts lead the way on explainable AI

Embracing the risk-based approach envisaged in the draft EU AI Regulation

22 June 2021

In two recent groundbreaking decisions, the Italian courts have taken a close look at how artificial intelligence software works, with the aim of ensuring that the exploitation of AI is unbiased, ethical and explainable.

The Deliveroo case: "Blind" AI leads to discrimination

Following the Italian trade unions' claim that Deliveroo discriminates against riders through its algorithm, the Court of Bologna investigated how Deliveroo allocates deliveries among the riders who apply through the dedicated app.

It emerged that Deliveroo provides its riders with a "flexible self-service booking service" (SSB) with which they book work sessions as follows:

  • To apply for deliveries, riders access the SSB each week and select the windows during which they will be available to make deliveries;
  • To make themselves available, riders access the SSB on Mondays at one of three times: 11 a.m., 3 p.m. or 5 p.m. The earlier a rider accesses the SSB, the better their chance of finding suitable delivery windows. For example, a rider has a greater chance of being allocated deliveries on a Saturday night (when Deliveroo's customers are most likely to order food) in a given week if they log into the SSB at 11 a.m. on Monday rather than at 5 p.m., because by 5 p.m. fewer delivery slots remain available for that week;
  • Each rider, however, is allowed to access the SSB at only one of those three Monday times, depending on their "reputation ranking" as calculated by 'Frank', Deliveroo's algorithm.

The reputation ranking is a score that combines two different variables, as follows:

  • The "reliability index", which is in inverse proportion to the number of times the rider failed to attend a work session they had applied for on the previous Monday; and
  • The "peak participation index", which is proportional to the number of times the rider made themselves available for deliveries during the high demand delivery windows, i.e. the windows between 8 p.m. and 10 p.m. on Fridays, Saturdays and Sundays, when Deliveroo's customers are most likely to require food deliveries.

The reputation ranking is materially affected if the rider:

  • Makes a "late cancellation" of a delivery window that the SSB had assigned to them: Riders can only withdraw from a delivery window 24 hours before that window starts, otherwise a late cancellation (occurring less than 24 hours before the start) will have a significant negative impact on that rider's reputation ranking;
  • Fails to log into the Deliveroo app at least 15 minutes before the start of the delivery window the SSB had allotted to them.
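
The decision describes the inputs to the reputation ranking but not the formula that combines them. Purely by way of illustration, a minimal sketch of how such a score might be computed is set out below; the weights and penalty values are hypothetical assumptions, not figures taken from the judgment.

    # Illustrative sketch only: the judgment describes the inputs to the
    # reputation ranking, not Deliveroo's actual formula. The weights and
    # penalty values below are hypothetical assumptions.

    def reputation_ranking(sessions_booked: int,
                           sessions_missed: int,
                           peak_windows_offered: int,
                           peak_windows_attended: int,
                           late_cancellations: int,
                           late_logins: int) -> float:
        # Reliability index: inversely proportional to the number of
        # booked work sessions the rider failed to attend.
        reliability = 1.0
        if sessions_booked:
            reliability = 1.0 - sessions_missed / sessions_booked

        # Peak participation index: proportional to availability during
        # the high-demand windows (8-10 p.m., Friday to Sunday).
        peak_participation = 0.0
        if peak_windows_offered:
            peak_participation = peak_windows_attended / peak_windows_offered

        # Hypothetical equal weighting of the two indices.
        score = 0.5 * reliability + 0.5 * peak_participation

        # Late cancellations (less than 24 hours before the window) and
        # failures to log in 15 minutes before it materially lower the
        # score; the penalty size is a placeholder.
        score -= 0.1 * (late_cancellations + late_logins)
        return max(score, 0.0)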

Based on the reputation ranking, the SSB then grants riders access on Mondays as follows:

  • 11 a.m.: 15% of riders, i.e. those having the best reputation ranking;
  • 3 p.m.: 25% of riders with the second-best ranking; and
  • 5 p.m.: The remaining 60% of riders.

Consequently, riders who access the SSB at 11 a.m. have more job opportunities than the others.
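
The tiered access mechanism itself is straightforward to express in code. The sketch below applies the percentages set out above; the function name and data shapes are hypothetical.

    # Illustrative sketch of the Monday access tiers: riders are sorted
    # by reputation ranking and split 15% / 25% / 60% as described in
    # the decision. Function and variable names are hypothetical.

    def assign_monday_slots(scores: dict[str, float]) -> dict[str, str]:
        """Map each rider id to the Monday time at which they may book."""
        ranked = sorted(scores, key=scores.get, reverse=True)
        n = len(ranked)
        cut_11am = round(0.15 * n)            # best 15% of rankings
        cut_3pm = cut_11am + round(0.25 * n)  # next 25%
        slots = {}
        for i, rider in enumerate(ranked):
            if i < cut_11am:
                slots[rider] = "11 a.m."
            elif i < cut_3pm:
                slots[rider] = "3 p.m."
            else:
                slots[rider] = "5 p.m."       # remaining 60%
        return slots

    # Example: the top-ranked rider books at 11 a.m. and sees the most
    # delivery windows still available.
    print(assign_monday_slots({"a": 0.9, "b": 0.7, "c": 0.5, "d": 0.2}))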

As a result, the Court of Bologna found that Deliveroo's working system is intrinsically discriminatory, because the way the algorithm 'Frank' calculates the reputation ranking is blind to the reason why a rider cancelled a delivery window less than 24 hours in advance or failed to log into the app 15 minutes before the window began.

The Court concluded that Frank's blindness discriminates against the riders, because it deprives them of some of their basic rights as employees. For example, Frank treats in the same way – by simply lowering the reputation ranking – cases that are in fact very different, e.g. that of a rider who fails to log in without justification and that of a rider who failed to log in for objective and legitimate reasons (e.g. illness, child care, or the exercise of a worker's right to strike).

The Court also noted that it would have been possible for Deliveroo to train Frank not to discriminate, considering that – at the time of the decision – Frank already corrected rankings in two cases: an injury sustained over consecutive shifts (provided there is evidence that it actually prevented the rider from continuing to work) and a technical problem with the platform, such as an app crash. In the Court's opinion, this showed that Deliveroo's decision to treat riders who were absent for legitimate reasons in the same way as riders without a valid excuse was entirely deliberate, and Deliveroo was therefore obliged to fix Frank.
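
In code terms, the Court's objection is that the penalty step has no notion of justification. A minimal sketch of the kind of correction the Court had in mind, assuming a hypothetical set of protected reasons, might look like this:

    # Minimal sketch of an excuse-aware penalty step, assuming a
    # hypothetical set of protected reasons. The judgment records that
    # Frank already corrected rankings for injuries on consecutive
    # shifts and for app crashes; the Court's point is that legally
    # protected absences should be treated in the same way.

    PROTECTED_REASONS = {"injury", "app_crash", "illness",
                         "child_care", "strike"}

    def apply_absence_penalty(score: float, reason: str | None) -> float:
        if reason in PROTECTED_REASONS:
            return score                  # justified absence: no penalty
        return max(score - 0.1, 0.0)      # hypothetical penalty otherwise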

The Mevaluate case: privacy consent is void if the user does not know how AI works

This case concerns the provision of a reputational rating service, whereby users could access a web platform made available by a non-profit organisation, Mevaluate, in order to obtain an impartial assessment of their reputational ranking. For example, a job candidate may have used this service in order to show their prospective employer a third party's assessment of their reputation.

In 2016, the Italian DP Authority issued a ban preventing Mevaluate from processing personal data through its web platform, because the processing was inconsistent with the then applicable Italian Privacy Code, in particular its rules on the lawfulness of processing, data minimisation, the data subject's consent, and the processing of sensitive data.

Mevaluate successfully appealed the decision before the Court of Rome, which held that, in the absence of a regulatory framework governing reputational rating services, the provision of such services is left to the initiative of providers like Mevaluate, and that both the users' use of the platform and the resulting rating rest on the users' consent to the processing of their personal data through the platform.

Following the Italian DP Authority's appeal, the Court of Cassation overturned the Court of Rome's decision on the following grounds:

  • The key issue is whether – before using the rating platform – the user is sufficiently informed about how the algorithm calculates the rating;
  • To resolve that issue, one must assess not only whether the user consented to the algorithm-based processing, but also whether that consent was lawfully given, i.e. whether the user was fully informed about all aspects of the subsequent processing (consistently with the then applicable data privacy law implementing the Privacy Directive);
  • It is the duty of the data controller (the service provider) to provide evidence that the consent given by the user was suitable to cover the actual processing – in this case, that the consent covered the logic underlying the algorithm's calculation of the ranking;
  • The lower court's decision does not deny that the algorithm lacked transparency. The Court of Rome concluded that transparency was not an issue, because it is market recognition that ultimately determines whether a product is worth buying (in other words users end up buying digital services that 'work well', but do not necessarily need to reverse engineer them to know how the algorithm works);
  • The Supreme Court rejected the lower court's reasoning, and concluded that by agreeing to use a web platform, the user cannot be deemed to have agreed to be bound to an algorithm the underlying logic of which is totally obscure to them.

Conclusion

While the decisions above take two different approaches – in the Deliveroo case the Court of Bologna considers the use of AI from an employment law perspective, while in the Mevaluate case the Court of Cassation tackles AI from a privacy angle – they are consistent in both their premises and their conclusions.

The starting point for both decisions is the view that there is an information asymmetry between the business deploying AI (Deliveroo, Mevaluate) and the user (the Deliveroo riders, the Mevaluate users), and that the business must be prevented from taking unfair advantage of that asymmetry. To avoid that risk, both courts deemed it necessary first to identify the capacity in which users use the AI-based solution: the Court of Bologna's decision starts from the premise that riders are, to all intents and purposes, employees; the Court of Cassation takes it for granted that reputation defines and identifies an individual to such an extent that reputation-related data qualifies as personal data, thereby triggering privacy laws.

Both decisions also rest on the assumption that the use of AI may significantly infringe certain fundamental individual rights, consistent with the risk-based approach envisaged in the draft AI Regulation currently being discussed by the EU institutions. The two decisions are quite ahead of the curve in this respect, because they flag a risk in cases where (i) the algorithm does not take into account all relevant information when rectifying a rider's ranking (e.g. Frank took app crashes into account, but not a rider's justifiable absences), (ii) the algorithm processes sensitive data (such as an individual's reputation), and/or (iii) the algorithm's decision-making is not transparent, so that the user cannot understand the logic behind the algorithm's decisions.

Put into the wider context of the EU institutions' approach to AI, the Italian decisions call for greater focus on all aspects of AI development and compliance. Businesses intending to use AI-based solutions should therefore consider the following actions:

  • Identifying all potential risks arising from the use of AI, considering environmental, social and governance drivers alongside purely business ones;
  • Promoting internal governance and compliance systems aimed at ensuring that the AI can be explained (e.g. to users and authorities) and at showing how the AI pursues algorithmic transparency, data cleanliness and ethics;
  • Identifying remedies (e.g. insurance policies) aimed at limiting the risks associated with the use of AI.


Shadiah Obaidi, trainee, contributed to the writing of this article.