
Clifford Chance


Talking Tech

Facial Recognition Technology – the race to regulate

Does your face fit?

Big Data | Data Privacy | 1 September 2020

Facial recognition technology ("FRT") has a variety of uses, ranging from facilitating access to our personal devices to enabling enhanced surveillance of public spaces by governments and police forces, and it is becoming an increasingly common part of everyday life. The benefits of FRT have been well documented, but the technology has also attracted significant criticism due to concerns over how its use may conflict with privacy and human rights laws.

The tumultuous political climate across the world and, in particular, the Black Lives Matter movement have had a significant impact in bringing concerns about racial profiling and violations of basic human rights and freedoms to light, placing the spotlight on some of the industry's biggest players. Amid concerns that the use of FRT is quickly outpacing the law due to the lack of regulation around it, some of the world's largest technology businesses have announced decisions to halt the supply of their FRT to police forces in the U.S. until lawmakers enact appropriate legislation to regulate its use. In a landmark judgment in August 2020, the UK Court of Appeal ruled that the use of an automated FRT surveillance system by the South Wales Police force unlawfully breached privacy rights. The decision followed an appeal brought by Liberty, a human rights organisation, which argued that the system's use was contrary to data protection laws and resulted in racial discrimination.

In Europe, the legal position on FRT is somewhat in flux. The European Commission has been hesitant to make a binding decision on the use of FRT and is allowing individual Member States to make their own assessments of whether its use should be banned in public spaces.

In this article, we discuss the mechanics of FRT, explain why further regulation of the technology is needed (and soon), and explore the key considerations for companies implementing FRT within their organisations.

What is Facial Recognition Technology?

FRT uses algorithms and associated hardware to analyse a person's face and make a claim about their identity by estimating the degree of similarity between two faces. The technology maps facial features from a photograph or video of an individual to create a "template" of the individual's face, which can be recognised by the system later or compared with other templates.

FRT systems capture an image of the subject's face and assess its geometry, noting details such as the location of the eyes and mouth and the distances between facial "landmarks" such as the nose, chin, or ears. This analysis produces a unique "facial signature", which is converted into algorithmic data.
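To make the idea of a "facial signature" more concrete, the short sketch below builds a crude signature from a handful of hypothetical landmark coordinates by computing the normalised distances between them. It is purely illustrative – real FRT systems derive far more sophisticated, machine-learned representations – and the landmark values and the signature function are assumptions made for this example.

```python
import numpy as np

# Hypothetical landmark coordinates (x, y) that a detector might extract from
# a face image – e.g. eyes, nose tip, mouth corners, chin. Values are made up.
landmarks = np.array([
    [30.0, 40.0],   # left eye
    [70.0, 40.0],   # right eye
    [50.0, 60.0],   # nose tip
    [35.0, 80.0],   # left mouth corner
    [65.0, 80.0],   # right mouth corner
    [50.0, 100.0],  # chin
])

def facial_signature(points):
    """A crude "facial signature": all pairwise distances between landmarks,
    normalised by the inter-eye distance so the result is scale-invariant."""
    eye_distance = np.linalg.norm(points[0] - points[1])
    distances = [
        np.linalg.norm(points[i] - points[j])
        for i in range(len(points))
        for j in range(i + 1, len(points))
    ]
    return np.array(distances) / eye_distance

signature = facial_signature(landmarks)  # a fixed-length numerical "template"
```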

The Centre for Data Ethics and Innovation – an advisory group to the UK Government – has identified two categories of FRT:

(1) facial verification technology systems that determine whether a face matches a single existing facial template (for example, the e-gates used at airport security); and

(2) facial identification technology systems that match one face to a database of many, by determining whether a face matches any facial template within that database of individuals (for example, social media technology that suggests friends to 'tag' in your photos).
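In practical terms, verification compares a captured template against a single stored template (a 1:1 check), whereas identification searches an entire database of templates for a match (a 1:N search). The sketch below illustrates that distinction; the cosine-similarity measure, the threshold value and the data structures are illustrative assumptions rather than a description of any particular vendor's system.

```python
import numpy as np

THRESHOLD = 0.8  # illustrative cut-off; real systems tune this carefully

def similarity(a, b):
    """Cosine similarity between two facial templates (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled):
    """Facial verification (1:1): does the face match one stored template?"""
    return similarity(probe, enrolled) >= THRESHOLD

def identify(probe, database):
    """Facial identification (1:N): does the face match anyone in a database
    mapping person identifiers to stored templates?"""
    best_id, best_score = None, THRESHOLD
    for person_id, template in database.items():
        score = similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None if no stored template meets the threshold
```

The 1:N search is what makes identification systems suited to surveillance: every captured face is checked against everyone enrolled in the database, rather than against a single record the individual chose to enrol.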

In the majority of these cases, the individual is making a conscious decision to use FRT because it is considered secure and efficient. Of course, even when an individual has made a conscious decision to use FRT, it still raises concerns from a data protection and ethics perspective. For example, FRT can still be controversial because it tends to be less accurate on individuals with darker skin: researchers at the National Institute of Standards and Technology in the United States found that FRT algorithms falsely identified African American and Asian faces 10 to 100 times more often than Caucasian faces. There is also an assumption that the individual's data is only being used for the immediate purpose; however, in each case it is possible that other activities are taking place in the background, either at the time or in the future, unbeknownst to the individual.

When used in a surveillance context, FRT is an augmentation of CCTV for several reasons – it can be used both retrospectively and live, it compares your image against a database of images, and it can be fully automated. When FRT is used for identification purposes, the captured image is converted to greyscale, cropped, and converted into a "template" to be used for facial comparisons. The operator can then search for the image and compare it against other templates on file to determine whether the face matches another image in the database.
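As a rough illustration of the pre-processing steps described above (greyscale conversion, detection and cropping of the face, and normalisation into a fixed-size "template"), the sketch below uses the open-source OpenCV library. The file name is a placeholder, and a production surveillance system would use considerably more sophisticated detection and template-generation models.

```python
import cv2

# Load a captured frame and convert it to greyscale, as described above.
frame = cv2.imread("captured_frame.jpg")
grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect face regions using a stock Haar-cascade detector bundled with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)

templates = []
for (x, y, w, h) in faces:
    # Crop the face and normalise it to a fixed size so that it can be turned
    # into a "template" and compared against other templates on file.
    templates.append(cv2.resize(grey[y:y + h, x:x + w], (112, 112)))
```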

Traditional CCTV raises several questions from a data privacy perspective, and its use is controversial in its own right. However, even though CCTV may collect information deemed to be sensitive personal data in some circumstances (such as an individual's ethnicity), the general view is that CCTV is acceptable where it is used for security purposes, provided appropriate (and prominent) notices are in place to inform individuals of its use, and individuals can reasonably expect CCTV to be recording them (such as in a shopping centre).

While FRT carries all of the privacy concerns of CCTV, it has the potential to be even more invasive. It also involves the overt use of biometric personal data, which is "special category personal data" under the GDPR and potentially sensitive under other data protection regimes. This makes any legitimate interests assessment – which involves balancing the purpose pursued by the data controller (e.g. the detection or prevention of crime) against the adverse effects on the rights and freedoms of the individual – more complicated.

Limitations

To date, there is no instrument in the UK that solely regulates FRT and associated technologies in significant detail. In the UK, the digital images and any output (such as reports or profiles generated by FRT) will be considered "personal data" under the GDPR and the Data Protection Act 2018 (DPA). The use of FRT for law enforcement purposes is also regulated by the Protection of Freedoms Act 2012, the Equality Act 2010 and the Human Rights Act 1998. Critics have argued that the current disjointed legislation fails to adequately address the complexity of FRT. Any company planning to use FRT-derived biometric data must comply with the varied regulatory rules that apply to FRT – and those rules keep evolving. To take one example, in March 2020, Washington became the first US state to pass legislation to regulate the use of FRT. For organisations and bodies operating in different territories, understanding the limitations and differing rules that apply will continue to be important.

GDPR challenges

The GDPR requires organisations within its scope to meet several requirements to protect the personal data that they process. The GDPR's core principles, which must be followed when processing an individual's personal data, include:

Transparency: Personal data must be processed lawfully, fairly and in a transparent manner. These requirements may be met in a surveillance context by, for example, providing data subjects with extensive information about any processing being undertaken via a privacy notice and appropriate signage.

Legitimacy and basis for processing: Personal data must be collected for a specified, explicit and legitimate purpose that is defined at the time the personal data is collected. Consent can be a basis for processing but is sometimes difficult to obtain in a manner that is fair and sufficiently well informed. In a surveillance context, consent is not always possible or practical to obtain. It may be that the organisation's 'legitimate interest' will be the legal basis for processing, in which case a legitimate interests assessment will have to be undertaken. That involves balancing the risks to individuals against the interests of the organisation – for the more covert uses of surveillance, sometimes the risks will be too high.

Data security: Data controllers must ensure that appropriate technical and organisational security measures are in place so that any personal data obtained through FRT remains secure. Even where companies purchase FRT solutions (as opposed to building their own technology), they must invest in, and require their suppliers to use, security solutions that support the technology and adequately protect the personal data against unauthorised or unlawful processing and against accidental loss, destruction or damage. For large-scale FRT projects, this will involve significant investment and scrutiny of the security measures of relevant suppliers.

Data minimisation: Personal data collected through FRT should be limited to what is necessary for the purposes for which they are processed and should not go beyond the explicit purposes previously outlined by the data controller. Mass surveillance will often cut against this principle, and so 'privacy by design' in creating the FRT solutions should be observed to seek to overcome potential privacy issues.

Individuals' Rights: Data controllers must facilitate the exercise of individuals' data protection rights. These include a right of access to their personal data; the right to have their personal data rectified where incomplete or inaccurate, or deleted (in certain circumstances); a "data portability" right to transfer their personal data; and the right to object, on grounds relating to their particular situation, to processing of their personal data (including profiling) that is based on legitimate interests.

Sensitive / "special category" personal data: The processing of sensitive/special category data is subject to further stringent protections due to the specific risks and implications it carries for fundamental rights. Article 4(14) of the GDPR defines biometric data as "personal data resulting from specific technical processing, relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic [fingerprint] data". Biometric data will be "special category data" when it is used to uniquely identify an individual (Article 9(1) of the GDPR).

Processing of special category data can only take place on a limited number of grounds. It requires the usual basis for processing (Article 6 of the GDPR), but must also fall within one of the exceptions under Article 9 of the GDPR, one of which is explicit consent. However, it is unlikely that companies will obtain explicit consent from every individual whose face is processed by FRT, and where relying on consent, the data subject must have the option to opt out – which would be difficult with FRT.

In cases where FRT is used for security or surveillance purposes, organisations may seek to justify its use under the "substantial public interest" limb. This requires companies to meet additional conditions and safeguards set out in UK law under Schedule 1 of the DPA. Companies relying on the condition that the use of FRT is necessary for "preventing or detecting unlawful acts" are required to have an appropriate policy document in place. They must also demonstrate that their specific processing (i) is "necessary for reasons of substantial public interest" and (ii) must be carried out without the consent of the data subject so as not to prejudice those purposes. Given the inherent risks of special category data, it is therefore not enough to make a vague or generic public interest argument. In reality, these policy documents need to do more than pay lip service to the compliance risks for which they are intended to establish organisational controls. Demonstrating that the use is "necessary" will not be simple – it is a high bar, requiring robust justification given the significant risks that FRT poses to individuals' privacy.

Where the FRT use case is likely to pose a high risk to the privacy of individuals (as will often be the case), a data protection impact assessment may be mandatory (and, in any event, would be best practice). This is an opportunity to pause and consider the risks and how to mitigate them. Where the FRT processing cannot be altered so that the risk is minimised, the organisation seeking to use it may need to engage with regulators before the processing activities begin – it is therefore important that these assessments take place at the earliest possible stage.

The future of FRT

Around the world, employees have started the process of returning to work following the COVID-19 pandemic. As a result, many companies have implemented technologies to limit physical contact between individuals and curb the spread of infection. To achieve this, some companies have combined thermal imaging with FRT systems to track and trace those who may be infected and to prevent those who are infected from entering certain buildings. In Singapore, the Ramco Innovation Lab has produced FRT that will enable organisations to recognise and track employees and visitors with elevated temperatures. The Italian government has procured FRT kits from Polysense, a technology company based in Beijing, which can be used to monitor crowds in public places. The system can store up to 65,000 facial images and has integrated gate and door access protocols.

FRT plays a key role in technology systems used around the world. However, for the technology to be effective while also respecting user privacy, governments must ensure that they have implemented a comprehensive legal framework to regulate the use of FRT. Without appropriate regulation, it is likely that the use of FRT will undermine individuals' privacy and may also entrench bias due to inaccuracies (especially where systems have varying accuracy rates for different demographic groups). Technology companies may also continue to demonstrate reluctance in providing FRT software to companies and governments, out of concern that they would be facilitating mass surveillance and encroachment on user rights by providing such organisations with a disproportionate power to surveil the population. For example, the use of FRT at demonstrations and protests (such as in the South Wales Police ruling) may discourage people from attending such events and from exercising their rights to freedom of speech and freedom of association.

The UK's Information Commissioner's Office (ICO) has called for a new binding code of practice to give clearer guidance on what can be considered a strictly necessary deployment of FRT, including guidelines for targeted watchlists and locations. Furthermore, in July 2020 the ICO and the Office of the Australian Information Commissioner (OAIC) announced the commencement of a joint investigation into the data processing practices of Clearview AI Inc, in connection with its use of personal data and biometrics of individuals. The investigation is focused on Clearview's FRT app, which allegedly contains over 3 billion images "scraped" from social media platforms and other websites (for more information, please see our Talking Tech Article: "Caught in a scrape: new investigation reveals risks of facial recognition AI"). This joint investigation sets a precedent for the global reach and influence that data protection authorities will have to enforce and uphold data protection rights and may increase the potential liability faced by companies where data is transferred internationally.

In June 2020, the U.S. Senate announced the introduction of the Facial Recognition and Biometric Technology Moratorium Act, which would place a prohibition on the use of FRT by federal entities that could only be lifted with the approval of Congress. The Act would also prohibit the use of federal funding for biometric surveillance systems.

The European Commission's White Paper on Artificial Intelligence (published in February 2020) suggests support for a moratorium on the deployment of automated recognition of human features in public spaces within the European Union, to allow for (i) an informed and democratic debate on the appropriate use of FRT, and (ii) Member States to implement appropriate safeguards, including a comprehensive legal framework to guarantee the justification and proportionality of the respective technologies and systems for the specific use case. The European Commission's next round of consultation will close in September 2020, and the proposed new rules could be introduced during the first quarter of 2021.

The outcomes of the measures proposed by the U.S. Senate and the European Commission respectively are yet to be seen – and there is still a long way to go before any use of FRT is subject to clear guidance on what can be considered a strictly necessary deployment and to safeguards for the rights and freedoms of individuals.

 

Uche Eseonu, Trainee Solicitor, contributed to the writing of this article