
Clifford Chance


Talking Tech

Inclusive Artificial Intelligence: a legal perspective

Building trust around AI

Artificial Intelligence | 8 November 2021

As Malcolm Gladwell suggests in his book 'Talking to Strangers: What We Should Know About the People We Don't Know', we as human beings are by nature trusting of people, of technology and of situations more generally: in short, we trust more than we suspect. And because we struggle to decipher strangers, we often default to truth.

Artificial intelligence (AI) can be a stranger whose face (the algorithm's logic) is very hard to read: AI's development may have been animated by the best intentions (e.g. providing customers with a great product), yet fail to take into account all the little things that make each customer say 'this product was designed for me'. When AI goes wrong, it may even discriminate against people or jeopardise the fundamental rights of parents, workers or minorities as a result of biases embedded in the algorithm's logic. And while the pursuit of a totally bias-free AI may be as utopian as a bias-free individual (biases are part of our perception of the world, after all), an extra effort to explain the logic underlying the algorithm can put the user in a position to approach AI in a conscious and well-informed way. This is also the lesson we can learn from the latest court decisions in Europe: software explainability and algorithmic transparency are key in the pursuit of truly inclusive AI. Let's take a look at how judges and authorities in the EU are approaching the issue.

The use of artificial intelligence offers great business opportunities, but it also triggers significant risks, such as unintentionally embedding biases in decision-making, as well as breaches of privacy, labour and human rights. Recent court decisions and institutions' draft regulations call for transparent and explainable AI as a means of pursuing inclusive, bias-free and ethical AI.

Since the beginning of 2021 there has been a significant increase in the scrutiny of organisations' use of AI. Over the course of the year there have been several decisions across Europe (four cases in Italy alone, and a class action has just commenced) in which courts and authorities have taken a close look at how artificial intelligence software works, with the aim of ensuring that the use of AI is unbiased and ethical.

Gig economy under the spotlight

A number of the abovementioned decisions involved big players in the gig economy, such as Deliveroo, Uber and Foodinho. These decisions usually follow complaints from workers or users who claim that the way the business deploys AI results in discrimination or in a breach of certain fundamental human, privacy or employment rights.

Assume, for example, that you are a parent and a delivery person (e.g. a rider): algorithmic discrimination may occur where the AI system tasked with allocating delivery slots to riders excludes you from work assignments because your availability is limited for reasons such as childcare.

When does AI discriminate?

This issue was discussed in the Deliveroo case: the trade unions brought proceedings before an Italian court alleging that the algorithm used by the delivery platform to rank riders and allocate deliveries was discriminatory, because the app downranked riders simply for failing to make a delivery, irrespective of whether the rider in question had justifiable reasons for their absence (e.g. health reasons, childcare, exercise of a worker's right to strike).

Similarly, the Italian privacy watchdog issued fines against Deliveroo and Foodinho on the grounds of GDPR breaches resulting, among other things, from those businesses' failure to (i) implement suitable safeguards to ensure the accuracy and fairness of the algorithmic results used to rate delivery riders' performance and allocate deliveries, and (ii) set up procedures putting the delivery rider in a position to enforce their right to "obtain human intervention, express a personal point of view, and challenge the decisions taken by the algorithm".

Better governance -> more inclusive AI

The abovementioned decisions take different approaches – e.g. some consider the use of AI from an employment law perspective, while others tackle AI from a privacy angle – but they all support the same vision: avoiding the risk of AI being blind to the various needs and differences that characterise users, workers and … human beings more generally.

There is an information asymmetry between the business deploying AI and the user (e.g. the riders). But if the logic used by the algorithm can be explained, the user can approach AI in an informed way, reducing the risk of being helplessly subject to AI decision-making.

Put into the wider context of the EU institutions' approach to AI, both the above decisions and the draft AI regulation call for greater focus on all aspects of AI development and compliance. Hence, the following actions should be taken into account by all businesses intending to use AI-based solutions:

  • Identifying all potential risks arising from the use of AI, considering environmental, social and governance drivers alongside purely business ones;
  • Promoting internal governance and compliance systems aimed at ensuring that AI can be explained (e.g. to users and authorities) and at showing how AI pursues algorithmic transparency, data cleanliness and ethics;
  • Identifying remedies (e.g. insurance policies) aimed at limiting the risks associated with the use of AI.

So, is transparent AI "solely" a matter of compliance with the letter of the law?

No. This is a matter of building trust around AI.

If the logic underlying the algorithm is obscure, the user is prevented from understanding whether AI can be good or bad for him/her personally. As the Italian Court of Cassation concluded in the recent Mevaluate decision, for example, when a web platform providing reputation ranking services relies on an algorithm to produce reputation scores, users' consent to that processing is not valid if they had no knowledge of the logic and key elements of the algorithm.

So, for users to be able to really make the most of a technology like AI, it is imperative that they are put in a position to use it in an informed way. By getting to know the potential outcomes (and even the flaws in the reasoning) of AI, the user can foresee the consequences of its use.

This should not make AI developers or businesses exploiting AI feel less responsible for ensuring bias-free AI. Quite the contrary: the success of an AI-based product will not only depend on its compliance with the letter of the law; it will mostly rely on who builds the best product, i.e. the product whose AI was trained on high-quality data that does not embed biases, and that ultimately makes no customer feel as if he/she has been 'left out' by the algorithm as a result of his/her gender, race, job or otherwise.