AI Resources – Replacing the "Human" in Global Recruitment
AI objectivity is only skin (or rather algorithm) deep and AI is currently far more human than we might think.
This article looks at the potential use of artificial intelligence (AI) in the recruitment process, the problems of bias, and how businesses can mitigate these risks.
Whilst the use of AI in recruitment is still in its (relatively) early stages, many businesses are adopting AI to perform tasks previously reserved for HR professionals, in the hope of achieving:
- more targeted talent scouting
- a streamlined application process
- additional layers of analysis at the interview stage
The ultimate aim is a consistent, effective and efficient recruitment process on a global scale.
AI works by extracting "meaning" from data: initial rules are set in algorithms and applied to populations of data, and as further data is added the machine "learns" further correlations and makes data-driven predictions – for example, identifying potentially "desirable employees" from CV information. The potential time and cost efficiencies of replacing humans with machines are alluring, and early adoption can provide a competitive edge. However, the potential liabilities in the race to automate should give pause for thought.
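To make this concrete, below is a minimal sketch of that learning loop in Python using the open-source scikit-learn library. The dataset, column names and figures are invented for illustration and not drawn from any real recruitment tool: the point is simply that the model reproduces whatever patterns sit in past human hiring decisions.

```python
# A minimal sketch of the learning loop described above, using a
# hypothetical dataset: each row holds numeric features extracted from a
# CV, and "hired" records the past human decision the model must imitate.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical data; real systems extract many more features.
cvs = pd.DataFrame({
    "years_experience": [2, 7, 4, 10, 1, 6, 3, 8],
    "degree_level":     [1, 2, 1, 3, 0, 2, 1, 3],   # 0=none .. 3=postgrad
    "keyword_matches":  [3, 9, 5, 8, 2, 7, 4, 9],   # overlap with job advert
    "hired":            [0, 1, 0, 1, 0, 1, 0, 1],   # past human decisions
})

X_train, X_test, y_train, y_test = train_test_split(
    cvs.drop(columns="hired"), cvs["hired"],
    test_size=0.25, stratify=cvs["hired"], random_state=0)

# The "rules" are not hand-written: the model infers whatever correlations
# predict past hiring outcomes, including any prejudices embedded in them.
model = LogisticRegression().fit(X_train, y_train)
print(model.predict(X_test))  # predicted "desirable employee" labels
```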
Many countries have legislation prohibiting discrimination (whether direct or indirect) based on certain core characteristics (usually a combination of age, disability, marriage, pregnancy and maternity, race, religion or belief, sex, gender reassignment and sexual orientation), including during the recruitment process. At first glance, the conscientious use of AI appears to assist compliance by ostensibly removing these core characteristics from any consideration: the machine instead applies certain prescribed criteria to filter the data for a "desirable employee", seemingly removing human bias, guaranteeing consistency and objectivity, and identifying the most desirable candidates.
AI bias
Unfortunately, this objectivity is only skin (or rather algorithm) deep, and AI is currently far more human than we might think. Bias (whether conscious or unconscious) can creep in through the humans deciding and/or coding the rules, and through the prejudices contained in the pre-existing data to which those rules are applied (data born of a prejudiced society can reflect those prejudices when put to use). Where this produces a discriminatory outcome, it may give rise to material liability in the form of discrimination claims and claims under the GDPR (which requires data controllers to take measures to prevent a discriminatory effect on natural persons), as well as reducing workforce diversity and causing reputational damage.
Direct discrimination may be prevented by ensuring the AI is not "fed" rules and/or data that intentionally or unintentionally discriminate against characteristics which are protected under legislation in the jurisdictions a business operates in. HR can ask these questions during procurement due diligence on the AI services contract, and seek warranties that algorithmic fairness has been achieved and that the underlying algorithms are not directly discriminatory, together with indemnities in the event of claims.
There are already a few notorious examples of AI resulting in indirect discrimination, which demonstrate how much more difficult this risk is to handle:
- A tech giant's recruitment tool, trained on a dataset of successful candidates over the prior 10 years (a predominantly male pool), taught itself that sex was a predictor of success and that women were therefore inferior applicants (a simplified illustration of this effect follows this list).
- Natural language processing tools, which extract details from text and vocabulary to make judgements about candidates or translate CVs written in other languages, may mark down the colloquialisms or unusual language patterns that non-native English speakers can exhibit, or produce mistranslations that put candidates at a detriment.
- Facial recognition tools, developed to detect eye contact, facial expressions and body language and to assess confidence and cultural fit, can struggle with the faces of ethnic minorities or fail to take account of physical and mental disabilities.
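The first example above can be reproduced in miniature. The sketch below uses synthetic data in which the model is never shown a candidate's sex, yet a correlated proxy feature in the CV allows it to reconstruct the historical bias baked into the training labels; all names and numbers are invented for illustration.

```python
# A synthetic illustration of indirect discrimination: "sex" is withheld
# from the model, but a proxy feature correlated with it lets the model
# reconstruct the bias in the historical labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
sex = rng.integers(0, 2, n)                  # 0 = female, 1 = male (held out)
proxy = (sex + rng.normal(0, 0.3, n)) > 0.5  # CV keyword correlated with sex
skill = rng.normal(0, 1, n)                  # genuine ability, sex-neutral

# Historical labels: past recruiters favoured men regardless of skill.
hired = (skill + 1.5 * sex + rng.normal(0, 0.5, n)) > 1.0

X = np.column_stack([proxy, skill])          # note: "sex" is NOT a feature
model = LogisticRegression().fit(X, hired)

preds = model.predict(X)
print("selection rate, men:  ", preds[sex == 1].mean())
print("selection rate, women:", preds[sex == 0].mean())  # markedly lower
```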
The list will grow with the ever-evolving new uses to which data is put, but so long as the coding teams and the data itself are products of an unfair society, the risk of bias remains.
Mitigating these risks
How can employers mitigate these risks when the complex algorithms and their workings are generally opaque to the layman, AI providers may be unwilling to open their valuable IP to inspection, and unintended consequences may not always surface immediately?
The Information Commissioner's Office (ICO) suggests in its blog on auditing AI bias and discrimination that organisations:
"should determine and document their approach to bias and discrimination mitigation from the very beginning of any AI application lifecycle so that the appropriate safeguards and technical measures can be taken into account and put in place during the design and build phase."
HR managers adopting AI should:
- be trained on discrimination law to spot the potential risks.
- carry out, or commission third-party auditors to conduct, robust testing prior to implementation, and subsequently conduct considered impact assessments on the data processing (a simple audit sketch follows this list).
- be ready to challenge an AI provider on its product during the procurement process, and be conscious to ensure where possible that there is adequate contractual recourse.
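As an illustration of what such pre-implementation testing might involve, the sketch below applies the "four-fifths" adverse impact rule of thumb used in US practice to a hypothetical sample of a tool's shortlisting decisions. UK discrimination law sets no fixed numeric threshold, so a check like this is a red-flag generator rather than a legal safe harbour, and all data here is invented.

```python
# A minimal pre-deployment test, assuming the tool's decisions can be
# obtained alongside separately held protected-characteristic data.
import pandas as pd

def adverse_impact_ratio(decisions: pd.DataFrame,
                         group_col: str,
                         selected_col: str) -> pd.Series:
    """Selection rate of each group divided by the highest group's rate."""
    rates = decisions.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

# Hypothetical audit sample of the AI tool's shortlisting decisions.
audit = pd.DataFrame({
    "sex":         ["F", "F", "F", "F", "M", "M", "M", "M"],
    "shortlisted": [0,   0,   1,   0,   1,   1,   0,   1],
})

ratios = adverse_impact_ratio(audit, "sex", "shortlisted")
print(ratios)
# Flag any group selected at under 80% of the best-treated group's rate.
print("Potential adverse impact:", (ratios < 0.8).any())
```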
The Confederation of British Industry (CBI) is similarly concerned by this risk to the adoption of AI, and on 8 August 2019 published an article calling on AI providers to follow the CBI's "three E's":
- Embed: embed ethics through governance processes.
- Engage: empower staff to engage with AI, challenge any unfair bias in data and ensure diversity in the workforce.
- Explain: make sure consumers understand the product, know when decisions are taken by AI, and understand how their data is being used to make those decisions.
Already alive to the threat to their services, many big data companies are bringing new tools to market which promise to look "under the hood" to detect and help avoid bias in algorithms, and these may prove valuable in the risk mitigation process (IBM and Google have launched their tools; Microsoft and Facebook have announced intentions to release their own).
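By way of example, IBM's offering is the open-source AI Fairness 360 toolkit. The sketch below shows how it might be used to measure disparate impact on a hypothetical sample of decisions; the column names and data are invented, and the API shown reflects the toolkit's early releases.

```python
# A minimal sketch using IBM's open-source AI Fairness 360 toolkit
# (pip install aif360); dataset and column names here are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical shortlisting outcomes: all columns must be numeric.
df = pd.DataFrame({
    "sex":         [0, 0, 0, 0, 1, 1, 1, 1],  # 0 = female, 1 = male
    "shortlisted": [0, 0, 1, 0, 1, 1, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["shortlisted"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)
print("Disparate impact:             ", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```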
The CDEI review into bias in algorithmic decision-making
The Centre for Data Ethics and Innovation (CDEI) is an independent advisory body, led by a board of experts, set up and tasked by the UK Government to investigate and advise on how to maximise the benefits of data-driven technologies. On 25 July 2019 the CDEI published an interim report, "Review into bias in algorithmic decision-making", in which it acknowledged that:
"the vendors of algorithmic recruitment tools, such as employment assessments to screen candidates, are exploring bias mitigation approaches but lack clear guidance on how to develop these."
The CDEI will survey existing tools and practices, supporting companies to understand the range of possible bias-mitigation approaches available and helping to shape industry standards and best practice. The CDEI expressly acknowledges that the AI sector has no clear regulator, and will accordingly consider with stakeholders the potential governance arrangements for overseeing the mitigation of bias across the sector. The CDEI is expected to submit its final report to the Government in March 2020. Each organisation's HR, IT procurement and legal teams should ensure that they familiarise themselves with the CDEI's recommendations on tools and techniques to mitigate bias.
The Future
It is clear that AI has the potential to positively disrupt recruitment processes globally. However, as diversity becomes a boardroom issue, it is imperative that HR teams are cautious and well informed in their adoption of AI services, in order to navigate the potential issues new technology may trigger. As long as AI reflects its makers and their world, it is difficult to see a future in which these issues are fully resolved, and this area will likely be a focus for regulation and litigation in the years to come.