
Clifford Chance


Talking Tech

Tech Policy Horizon Scanner

October 2023

Artificial Intelligence | Data Privacy | 31 October 2023

Introduction

Welcome to the October edition of our Global Tech Policy Horizon Scanner. The UK is in the spotlight this month, hosting the AI Safety Summit at Bletchley Park on 1-2 November. We are co-hosting the accompanying AI Fringe, which is being live-streamed here – please do join us.

In significant developments this month, China published a draft of its Global AI Governance Initiative for consultation, India announced the first edition of its IndiaAI report, and trilogue negotiations on the EU Artificial Intelligence Act continued to make progress.

In this edition, we also look at South Korea's public consultation on guidance for its Personal Information Protection Act amendments and Australia's revised consumer data enforcement guidelines. In the UK, six leading AI companies responded to the government's request for their safety policies, and in the Americas, Meta is facing a landmark complaint accusing it of creating intentionally addictive platforms. In the Middle East and Africa, we look at the UAE's metaverse framework and Kenya's taskforce on Data and AI Guidelines.

APAC (excluding China)

South Korea's Personal Information Protection Commission seeks public feedback on draft guidelines on amendments to the PIPA and PIPA Enforcement Decree

On 27 September 2023, South Korea's Personal Information Protection Commission (PIPC) announced draft guidelines on the 2023 amendments to the Personal Information Protection Act (PIPA) and the PIPA Enforcement Decree, and opened them for public input. The draft guidance summarises the key legislative rules and issues that personal information handlers should be aware of, as well as the specifics and purpose of the PIPA amendments taking effect in 2023. Notably, the PIPC said the draft guidance was developed to raise awareness of the PIPA amendments among all concerned stakeholders.

The draft guidance is set to be finalised in December 2023, and public comments can be submitted via email to the PIPC at jzoos77@korea.kr.

Japan's National Center of Incident Readiness and Strategy updates Q&A Handbook on cybersecurity laws and regulations

On 25 September 2023, Japan's National Center of Incident Readiness and Strategy (NISC) published version 2.0 of its Cybersecurity-Related Laws and Regulations Q&A Handbook, which explains the laws and regulations relevant to cybersecurity measures and reflects developments in the cybersecurity environment. The NISC has stated that the new edition covers issues including responses from authorities when a cybersecurity incident occurs, regulations on authentication and identity verification, and the deletion of data and the disposal of devices and electronic media on which data is recorded.

India's Ministry of Electronics & Information Technology submits first edition of IndiaAI Report

On 16 October 2023, seven working groups of India's Ministry of Electronics & Information Technology published the first edition of the IndiaAI report, which sets out the core goals of the IndiaAI program and acts as a "guiding roadmap" for the growth of India's AI ecosystem. As well as providing skills programs and supporting startups, the IndiaAI program will comprise the India Datasets Platform (a large collection of anonymised datasets that Indian researchers can use to train multi-parameter models) and the India AI Compute Platform (which will create substantial graphics processing unit capacity for startups and researchers).

Australian privacy and competition watchdogs issue revised consumer data enforcement guidelines

On 12 October 2023, the Australian Competition & Consumer Commission (ACCC) and the Office of the Australian Information Commissioner (OAIC) published their updated compliance and enforcement guidelines for the Consumer Data Right (CDR). The 10-page report lists behaviours likely to attract enforcement action, such as inadequate data quality, lax security protocols and misuse of CDR data. The CDR gives consumers the power to share personal information held about them by companies and to approve its secure transfer to trusted third parties. The guidelines were last revised in May 2020.

China

Cyberspace Administration of China consults on draft Global AI Governance Initiative

On 18 October 2023, the Cyberspace Administration of China (CAC) published the Global AI Governance Initiative, which highlights the rapid development of global AI technology and the associated risks, and proposes, among other things, to: (1) respect other countries' state sovereignty and strictly abide by their laws when offering AI products and services; (2) establish and improve laws and regulations to ensure privacy and data security in the research, development and application of AI; (3) establish and promote a testing and assessment system based on the risk levels of different AI products; and (4) adhere to the principles of fairness and non-discrimination. The CAC also called on all countries to increase information sharing and technological cooperation on AI governance in order to jointly prevent risks, and to develop AI governance frameworks, norms and standards that make AI technologies more secure, reliable, controllable and equitable.

China's Ministry of Industry and Information Technology consults on draft for data security risk assessment in the field of industry and information technology

On 9 October 2023, China's Ministry of Industry and Information Technology (MIIT) published the consultation draft of the Implementing Measures for Data Security Risk Assessment in the Field of Industry and Information Technology (for Trial Implementation) (the Implementing Measures), aiming to outline the specific requirements around the risk assessment stipulated under the "Administrative Measures on the Data Security in the Field of Industry and Information Technology (for Trial Implementation)" issued last December. The Implementing Measures apply to the security assessment conducted by the processors of important data and core data in the field of industry and information technology, who are required to focus on the purpose and manner of data processing activities, business scenarios, security safeguards and risk impact during risk assessments.

China's National Information Security Standardization Technical Committee (TC260) consults on draft technical document for generative artificial intelligence services

On 11 October 2023, China's National Information Security Standardization Technical Committee (TC260) published a consultation draft of the Basic Security Requirements for Generative Artificial Intelligence Service (the Technical Document). The Technical Document sets out basic security requirements for generative AI services provided within China, covering corpus security, model security, security measures and security assessment.

EU

European Commission recommends carrying out risk assessments on four critical technology areas

On 3 October 2023, the European Commission (EC) published a Recommendation on critical technology areas for the EU's economic security, for further risk assessment with Member States. The Recommendation identifies ten critical technology areas, including four areas highlighted as the most sensitive and with immediate risks related to security and technology leakage: (i) advanced semiconductors, (ii) artificial intelligence, (iii) quantum technologies, and (iv) biotechnologies. It suggests that Member States, in collaboration with the EC, carry out collective risk assessments in these four areas by the end of 2023. An open dialogue will be conducted between the EC and Member States to determine the appropriate timing and scope of future risk assessments, and the EC is expected to propose any new initiatives by spring 2024.

European Commission recommends coordinated response to illegal online content and adopts rules on independent audit under the Digital Services Act

On 18 October 2023, the European Commission (EC) issued a set of Recommendations for Member States to coordinate their response to incidents that may increase the spread of illegal content online. The Recommendations aim to ensure that Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) designated under the Digital Services Act (DSA) comply with their obligations to adopt mitigation measures as regards illegal content. The Recommendations encourage (i) VLOPs and VLOSEs to draw up incident protocols and (ii) Member States to designate independent authorities to work with the Digital Services Coordinators network ahead of the full application of the DSA on 17 February 2024.

On 20 October 2023, the EC adopted a Delegated Regulation on independent audits to assess the compliance of VLOPs and VLOSEs with the DSA. The DSA requires VLOPs and VLOSEs to undergo, at least once a year, an independent audit of their compliance with all DSA obligations. The EC provides that designated VLOPs and VLOSEs should be subject to their first audit no more than 16 months after their designation, and will be required to transmit the audit report to the EC and to the competent authority in their Member State of establishment.

EDPS releases final recommendations for Artificial Intelligence Act

On 24 October 2023, the European Data Protection Supervisor (EDPS) published its Final Recommendations on the proposed Artificial Intelligence Act (AI Act). As trilogue negotiations on the Act entered their final stage, the EDPS emphasised the need to prohibit AI systems that pose unacceptable risks to individuals and their fundamental rights, including AI systems for the automated recognition of human features and behaviour in public spaces and for the categorisation of individuals based on biometric features. The EDPS also recommended giving individuals affected by the use of AI systems the right to lodge a complaint with a competent authority, and welcomed the establishment of the European Artificial Intelligence Office to centralise enforcement of the AI Act and harmonise its application across Member States.

UK

Leading AI companies respond to UK government request for safety policies

On 27 October 2023, six leading AI companies (Google DeepMind, Anthropic, OpenAI, Microsoft, Amazon and Meta) published their safety policies following a request from the UK Government in September, which aims to increase transparency and the sharing of best practice. The companies were asked to outline their AI safety policies across nine areas: responsible capability scaling; model evaluations and red teaming; model reporting and information sharing; security controls, including securing model weights; reporting structures for vulnerabilities; identifiers of AI-generated material; prioritising research on risks posed by AI; preventing and monitoring model misuse; and data input controls and audits.

To complement these safety policies, the UK Government has also published its Emerging Processes for Frontier AI Safety policy paper, which outlines a potential set of safety practices for frontier AI organisations and a framework for managing frontier AI risk. The paper is intended as an early contribution to the discussion and is expected to require further updating as the technology develops.

UK's Online Safety Act becomes law

On 26 October 2023, the UK's Online Safety Act received Royal Assent, following extensive debate in the House of Commons and House of Lords. The UK government describes the new rules placed on social media platforms as "world-first" and says the Act will make the UK the safest place in the world to be online. The Online Safety Act requires social media and tech companies to prevent and rapidly remove illegal content, and to prevent children from viewing harmful material.

Ofcom has published a paper on its approach to implementing the Online Safety Act, stating that it will issue guidance and codes of practice in three phases: Phase One, focusing on illegal harms and duties (November 2023); Phase Two, focusing on child safety, pornography and the protection of women and girls (December 2023); and Phase Three, covering transparency, user empowerment and other duties on categorised services (early 2024).

UK's ICO publishes guidance to ensure lawful monitoring in the workplace

On 3 October 2023, the UK's Information Commissioner's Office (ICO) released guidance on the monitoring of workers by employers and its implications for data protection. The guidance aims to provide greater regulatory certainty, protect workers' data protection rights, and help employers build trust with workers, customers and service users.

The guidance also provides practical advice to help employers comply with the UK GDPR and the UK Data Protection Act 2018. The ICO's research found that 70% of the public would consider it intrusive to be monitored by an employer. It therefore recommends that organisations which monitor workers make them aware of the nature, extent and reasons for the monitoring; have a clearly defined purpose and use the least intrusive means to achieve it; and have a lawful basis, such as consent or legal obligation, for processing workers' data.

Americas

US State Attorneys General file landmark complaint against Meta for creating intentionally addictive platforms

On 24 October 2023, 33 US state attorneys general and the District of Columbia filed a complaint against Meta Platforms Inc. (Meta), which owns Instagram, Facebook and WhatsApp. The complaint alleged that Meta created intentionally addictive platforms to manipulate children and teenagers, and accused Meta of collecting data from minors without parental consent. In addition to the federal lawsuit, eight individual state attorneys general have brought similar state-level lawsuits against Meta.

Google and Microsoft announce indemnification for generative AI users against IP infringement claims

On 13 October 2023, Google announced that it would defend users of its generative AI products against intellectual property infringement lawsuits. The company released a statement on its Google Cloud platform outlining the extent of its AI indemnification and emphasising its focus on protecting users and promoting innovation. Similarly, when Microsoft launched its AI product, Copilot, in September 2023, it announced that it would assume liability for IP infringement claims brought against users of the product.

US Consumer Financial Protection Bureau accelerates progress on open banking

On 19 October 2023, the US Consumer Financial Protection Bureau (CFPB) proposed the Personal Financial Data Rights rule, which would accelerate the shift toward open banking in the US – a significant development, as many countries are already ahead of the US in their approach to open banking. The CFPB notes that the rule would give consumers control over data about their financial lives and new protections against companies misusing their data. Among other provisions, the proposed rule would give consumers the right to share their data and would require certain practices and protections in the handling of data by financial firms. The proposed rule is open for public comment until 29 December 2023.

Middle East

The UAE's Artificial Intelligence Office releases white paper on the Responsible Metaverse Self-Governance Framework

On 9 October 2023, the UAE's Artificial Intelligence, Digital Economy and Remote Work Applications Office and Dubai's Department of Economy and Tourism launched a white paper, titled the "Responsible Metaverse Self-governance Framework". The paper discusses self-regulation principles within the metaverse, its key uses, and challenges such as data protection, privacy, digital well-being, and intellectual property protection. The report outlines nine self-regulatory principles for the metaverse, including interoperability for access, privacy by design, sustainability by design, trust, fairness, equality, inclusion, diversity, accountability, and safety by design. These principles are aimed at building trust, credibility, ethical practices, and user protection, ensuring seamless data transition across diverse platforms.

DIFC launches consultations on the new Digital Assets Law, new Law of Security and amendments to related legislation

On 28 September 2023, the Dubai International Financial Centre (DIFC) announced proposals for a new Digital Assets Law and a new Law of Security regime. The proposed Digital Assets Law outlines the legal characteristics of digital assets, their proprietary nature, and how they can be controlled, transferred and dealt with by interested parties, with the aim of providing a comprehensive legislative framework for digital assets.

The new Law of Security is modelled on the UNCITRAL Model Law on Secured Transactions, adapted to the DIFC's specific requirements. The DIFC proposes to repeal the current Law of Security (enacted in 2005), noting that innovations have emerged since then, such as credit extended in, or secured by, digital asset collateral arrangements, and that the new legislation will provide greater clarity on taking security over digital assets. The DIFC also proposes to repeal the current Financial Collateral Regulations and to amalgamate financial collateral provisions into the new Law of Security.

The proposed legislative changes have been published in Consultation Papers for a 40-day consultation period, with public comments welcome until 5 November 2023.

Africa

Media Council of Kenya announces taskforce on Data and AI Guidelines

On 9 October 2023, the Media Council of Kenya (MCK) announced a taskforce to develop guidelines on the application of AI and data in journalism and on the use of social media, to ensure their appropriate and ethical integration into professional journalism. The MCK stated that the taskforce would be drawn from the media, technology, academia and legal fields, and would consider the "benefits and threats of the new technologies, make recommendations on ethical considerations that will help improve the quality of journalism and integrate the use of data in journalism". Over three months, the taskforce will develop three key documents: "Journalists' Handbook for Reporting Artificial Intelligence and Data", "Media Guidelines on the Use of Artificial Intelligence and Data" and "Ethical Guidelines on the Use of Social Media and the Internet by Journalists and Media Houses".

The CEO of the MCK, David Omwoyo, stated that the media "have a duty to provide accurate information and coverage on matters data and AI and their implications in the daily lives of Kenyans, their government and other actors. A clear strategy for media capacity building and ethical guidelines are needed". Following the announcement of the taskforce, the MCK also reaffirmed its commitment to collaborate with Kenya's Office of the Data Protection Commissioner.

Kenya's Office of the Data Protection Commissioner issues guidance on consent

On 29 September 2023, Kenya's Office of the Data Protection Commissioner (ODPC) published guidance on consent, detailing the data subject consent requirements under the Data Protection Act 2019 (the Act). Although the guidance does not prescribe specific consent measures that data processors or controllers must adopt, they must be able to prove that valid consent was obtained from the data subject before processing their personal data, and must seek new, specific consent if the purpose of the processing changes. The guidance also emphasises that data processors and controllers must be consistent in their application of lawful bases for processing – they cannot retrospectively rely on a different lawful basis to justify processing.

Nigeria admitted as member of the Global Privacy Assembly

It was reported that, on 20 October 2023, during the 45th Global Privacy Assembly in Bermuda, Nigeria was admitted to the Global Privacy Assembly in recognition of its advancements in data privacy protection, including the assent to the Nigeria Data Protection Bill in June 2023 and the establishment of the Nigeria Data Protection Commission.

Established in 1979, the Global Privacy Assembly provides a platform to advance international efforts in this space and currently comprises more than 130 members. Nigeria's admission signals that it is a country with adequate data protection provisions. The Nigerian Data Protection Commissioner, Dr Vincent Olatunji, has encouraged stakeholders to maintain a "high standard of care in handling personal data" and called for continued efforts to increase confidence in Nigeria's emerging digital economy.

Additional Information

This publication does not necessarily deal with every important topic nor cover every aspect of the topics with which it deals. It is not designed to provide legal or other advice. Clifford Chance is not responsible for third party content. Please note that English language translations may not be available for some content.

The content above relating to the PRC is based on our experience as international counsel representing clients in business activities in the PRC and should not be construed as constituting a legal opinion on the application of PRC law. As is the case for all international law firms with offices in the PRC, whilst we are authorised to provide information concerning the effect of the Chinese legal environment, we are not permitted to engage in Chinese legal affairs. Our employees who have PRC legal professional qualification certificates are currently not PRC practising lawyers.