
Data poisoning and AI transparency

Artificial Intelligence | Cyber Security | 6 October 2023

In this article, part of behavioural insights practice Canvas8's 2024 Expert Outlook – an annual report helping businesses to understand and navigate behavioural change in the year ahead – Herbert Swaniker, a senior lawyer at Clifford Chance, charts the new challenges for cyber security.

SCOPE

The rapid rise of generative AI tools – from large language models to services for generating images, videos, and music – means that the conversation about technology has never been so far-reaching, tapping into people's fears and hopes for the future. According to a Canvas8 survey conducted in June 2023, 38% of Britons and 41% of Americans feel somewhat or very worried about AI. The technology has permeated all sectors, promising to redefine education, creativity, and careers. In fact, 73% of US consumers think AI could improve the customer experience, particularly in digital settings, while in the UK, 42% of employees who use AI at work are reluctant to tell their managers that they do. AI is enabling creativity and even a new sense of community through user-generated content – but it is also highlighting the importance of control over one's data, as people demand more transparency from the companies making these tools. In 2024, the EU's AI Act is on the horizon, and it will shape what comes next.

Following the release of Apple's Vision Pro, people are interrogating the potential of spatial computing and wearable technology, both for the enterprise and for their private lives. The device also highlights the demand for more sensory engagement with technology that goes beyond the screen, potentially enhancing accessibility. In 2024, there will be growing awareness of the complexity of people's relationships with technology and of its impact on our health, mental wellbeing, and outlook on the world. Building a holistic understanding of these tools is fundamental.

DATA POISONING AND AI TRANSPARENCY

Herbert Swaniker is a senior lawyer at the international law firm Clifford Chance. He advises global organisations on technology-related issues and policy matters, particularly concerning emerging technologies, artificial intelligence, cyber security, privacy, and commercial law.

WHAT IS THE BIGGEST THREAT TO THE STATUS QUO IN 2024?

Cyber incidents will become much closer to home, more challenging to control, and global in scale. The status quo will be challenged by the increased availability of connected devices and AI-enabled solutions. As one example, Business Insider has predicted that there will be 64 billion Internet of Things devices installed around the world by 2026. Technologies like AI require huge amounts of data, and that increases cyber risk.

One particular example is data poisoning, where malicious actors corrupt AI models with biased training data in order to control them and create unintended outcomes; this will be a big cyber security concern. In the safety context, manipulating traffic signals and traffic lights can create physical risks. But there is bias risk as well – it can even threaten democracy. The uptake of connected devices will continue through 2024, but with it will come this increased threat from a cyber perspective. There will be much more public awareness around this, and we'll need to do a lot more education, both for consumers and for businesses.
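To make the mechanics of data poisoning concrete, here is a minimal, purely illustrative sketch (not from the article, and not a real-world attack) using scikit-learn. It demonstrates label flipping, one simple poisoning technique: an attacker who can silently tamper with a fraction of the training labels degrades the accuracy of the resulting model on clean data, without touching the model itself.

```python
# Illustrative toy example of label-flipping data poisoning.
# Assumes scikit-learn and NumPy are installed; all names here are
# for demonstration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean, well-separated binary classification data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(train_labels):
    """Train on the given labels; evaluate on the untouched test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return model.score(X_test, y_test)

print("accuracy with clean labels:   ", train_and_score(y_train))

# The attacker flips the labels of 30% of the training examples.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print("accuracy with poisoned labels:", train_and_score(poisoned))
```

Running this shows the test accuracy dropping once the poisoned labels are used for training – a toy version of the broader point that whoever controls the training data can steer the model's behaviour.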

2024 will see the realisation of a new wave of digital regulations and laws. Importantly, this is going to change our experience of online spaces and, specifically, how we use technologies like AI. It's really clear that people want appropriate guardrails and standards for AI, so I'm expecting more collaboration between government and industry to develop these rules. We've seen a lot of that in the public discourse: companies want to provide their input, to make sure the rules are fit for purpose, and to understand how they can design AI in a safe way.

The AI Act will come into effect in Europe. It's the biggest shift in a generation in AI law, and it's really significant. These rules will change how AI is designed and how it's deployed, and I suspect it will be a bit like the privacy renaissance we had in the late 2010s with the GDPR. It will be a real moment where people have to interrogate how they will comply with these rules. There will be more of a dialogue between AI deployers, the people who distribute the AI, and the end user.

There will be specific controls on things like transparency. We can have discussions about bias, for example, but it will be the first time there are AI-specific laws saying you need to make sure the data you're using represents the communities your tool serves. The biggest public shift will be around human oversight, and we'll start to see organisations that develop or deploy AI being much more public about what those controls are, who is responsible, and what those principles are – and being able to actually demonstrate them.

In terms of how businesses can stay competitive in 2024, it's about knowing what AI you use, what AI you want to use, where you want to use it, and why. Having a basic awareness of all of those points, and then a clear internal and external communication strategy to bring people along with your vision, will be so important. That's the clarity of thought and execution that will distinguish the winners: if you actually understand this technology and you're not just in a boardroom saying 'AI is great', then you can actually speak to the people who will be using it, or who may face challenges created by it.

There will be a new discourse that opens up next year, catalysed by things like these new rules. You will have to explain way more than you used to, and you will have to understand way more. I think that's a good thing. I also think it's a challenging thing, and that's why the winners will invest in this: it involves collaborating within your own organisation, with society, and with the people that you serve – and that takes time and effort.

This article was written by Anastasiia Fedorova and first published as part of Canvas8's 2024 Expert Outlook. It is republished here with their kind permission.