Do you know what your AI is doing?
Organisations using AI must understand how their AI works and be able to explain it to the public and to regulators.
Artificial Intelligence is the new "must-have" for many businesses and other organisations, but there are concerns that it is being misused and may sometimes be discriminatory. When challenged, some organisations have been unable to address those concerns, suggesting that they do not properly understand how their AI is working.
AI is everywhere, making decisions that can affect all of us. Depending on your point of view, the algorithms behind it are excitingly futuristic or deeply sinister, and press coverage has highlighted allegations of discrimination against various groups. Sometimes this may happen because systems have been trained on incomplete data sets, but in other cases, particularly those involving "deep learning", it may not be clear exactly what the algorithm is doing, even to the people running it. Attention so far has focused mostly on the use of AI by public authorities, but any organisation can be challenged by its customers or others to explain exactly how its AI is being used.
This can be difficult if an organisation has bought the AI software from a third-party supplier and either does not know how it works or is contractually prohibited from disclosing that information. There may also be issues if the AI was designed in-house but its creator has left and no-one can explain it to the legal or regulatory teams. Even if an organisation has information it can provide, it can be difficult to translate the technical language into an explanation that a lay person can understand. The same knowledge gap is one reason for scepticism about AI ethics policies. Responsibility for those policies may sit with a different part of the business from the one creating, or purchasing, the AI, but it is crucial that everyone in an organisation understands what the policies say and that there is a clear route for raising concerns or reporting non-compliance.
The Information Commissioner's Office is coming to the end of a consultation on its AI Auditing Framework, with a view to publishing its formal consultation paper in January 2020. The ICO's framework focuses on the data protection aspects of the use of AI. It notes that "it is important that organisations do not underestimate the initial and ongoing level of resources and effort" that will be required to manage the risks associated with AI, and warns against using Data Protection Impact Assessments as a "mere box ticking compliance exercise."
The Centre for Data Ethics and Innovation published a review into bias in algorithmic decision-making over the summer, and suggested that choices made about the use of algorithms (for example, trade-offs between fairness and accuracy) will have to be considered on a sector by sector basis, and that "new functions and actors, such as third party auditors, may also be required to independently verify claims made by organisations about how their algorithms operate."
Supreme Court Justice Lord Sales recently suggested an "algorithmic audit" of code before it is put into operation. For government departments, this might involve an impact assessment, similar to those required by law for environmental and equality issues. Lord Sales also referred to the pre-legislative scrutiny of Acts of Parliament as a possible starting point for an appropriate model. But these steps would require detailed information about the algorithm to be made available to the reviewers, which may not be attractive to those whose algorithms are being reviewed. Lord Sales suggested that an expert commission "could be given access to commercially sensitive code on strict condition that its confidentiality is protected", but we would question whether that is realistic, given the reach of Freedom of Information legislation. Even if government departments and other public bodies could be compelled to take part in a pre-deployment audit process, it is less clear how the private sector could be required to do so.
At present, there are many suggestions but no firm process for organisations to follow when adopting AI. However, they could usefully think about their answers to the following questions:
- How is AI being used within our business - both internally and with our clients?
- Are we buying in or building AI technology in house?
- Are our legal and regulatory teams working closely with our procurement teams and technical teams to ensure that our use of AI is lawful and ethical?
- Have all of the above been part of the drafting or adoption of any AI-related policies, such as ethical frameworks and product approval policies?
- Who checks compliance with the policies and audits the AI use, and how often should it be done?
- Are our public statements about our use of AI keeping up with changes in that use?
- How would we explain the way our AI works, and justify its use, to a regulator, authority or judge?
Uncertainty about the answers to any of these questions should prompt organisations to review their use of AI, and to use it, and communicate about it, in a way that is more transparent and responsible, and therefore less open to challenge.