Clifford Chance

Talking Tech

Tech Policy Horizon Scanner

January 2024

Artificial Intelligence | Data Privacy | 2 February 2024

Introduction

AI dominated the annual meeting of the World Economic Forum in Davos, and it continues to top legislative agendas. The EU continues to move regulation forward like there's no tomorrow, or as if there's a (June) election around the corner. The EU Data Act became law, with most of its provisions set to apply from 12 September 2025. The near-final text of the EU AI Act was leaked, previewing the rules governing general-purpose AI models (GPAIs) and a more aggressive implementation timeline.

On 24 January 2024, the European Commission announced a new package of measures to support innovation and European start-ups and SMEs in the field of AI, including giving European start-ups access to supercomputers. The package also includes the decision to establish the AI Office within the European Commission - a key body for the implementation of the upcoming EU AI Act.

Safety and ethical concerns continue to be the subject of focused interventions. The Australian government published its interim response to its consultation paper on the safe development and deployment of AI, and Israel and Saudi Arabia issued comprehensive guidelines. The Hong Kong Privacy Commissioner has issued practical guidance on data protection in AI, ranging from ethical AI development to advice for users of AI chatbots. The UK's National Cyber Security Centre published a report which states that AI will almost certainly increase the volume and impact of cyberattacks over the next two years.

In the U.S., state privacy legislation is continuing to roll out. Utah's consumer privacy law came into effect on 31 December 2023, and New Jersey became the fourteenth state to pass privacy legislation. It was signed into law on 16 January 2024 and is due to become effective in January 2025.

In a big non-AI related development, the SEC approved 10 spot bitcoin ETFs in the United States. Clifford Chance acted for VanEck, one of those first 10.

APAC (excluding China)

Hong Kong Privacy Commissioner for Personal Data (PCPD) issues AI guidelines and organises joint conference on data protection in AI

On 15 January 2024, the Privacy Commissioner for Personal Data (PCPD) in Hong Kong highlighted the potential privacy risks of AI, including excessive data collection, opaque usage of data, reidentification risks, and concerns about data accuracy. Despite the absence of a comprehensive AI regulatory framework in Hong Kong, the Personal Data (Privacy) Ordinance and sector-specific laws provide some legal guardrails. To address these challenges and promote responsible AI use, the PCPD has issued practical guidance, emphasising the importance of data protection, user awareness, and proactive steps to secure personal information in AI interactions.

Australian government releases preliminary response to AI safety discussion paper

On 17 January 2024, Australia's Department of Industry, Science and Resources (DISR) published the government's interim response to the DISR consultation on its discussion paper, "Supporting Responsible AI in Australia". The response reflects a commitment to fostering safe and responsible AI development, acknowledging AI's economic and innovative potential while recognising the need for regulation to address the risks posed by high-risk AI applications. The proposed framework would involve proactive harm prevention measures, legal clarification and enhancement to protect citizens, international cooperation on safe AI practices, and investment to harness AI's benefits.

Singapore Infocomm Media Development Authority consults on Model AI Governance Framework for Generative AI

On 16 January 2024, the Infocomm Media Development Authority of Singapore and the AI Verify Foundation requested public comments on a draft Model AI Governance Framework for Generative AI. This revises the initial 2019 framework to address contemporary risks associated with generative AI, such as hallucination, copyright infringement, and misalignment with human values. The draft framework proposes nine dimensions to ensure the responsible development, deployment, and management of AI systems, including data quality and source reliability, transparency in AI creation, and incident reporting mechanisms. This comprehensive approach seeks to establish robust governance that balances innovation with ethical considerations and public trust in AI technologies.

China

Chinese regulators jointly issue the "Data Elements ×" Three-Year Action Plan (2024-2026)

On 31 December 2023, 17 Chinese departments, including the National Data Bureau, the Cyberspace Administration of China, the Ministry of Science and Technology, and the Ministry of Industry and Information Technology, jointly issued the "Data Elements ×" Three-Year Action Plan (2024-2026). The Action Plan sets out a framework to be implemented by the end of 2026, which aims to enhance the application of data in various areas, including industrial manufacturing, modern agriculture, commercial distribution, transportation, financial services, technological innovation, cultural tourism, health care, emergency management, meteorological services, urban governance, and the green and low-carbon economy. The Action Plan also calls for enhanced support for data supply, the data flow environment, and data security.

China's Information Security Standardisation Technical Committee releases consultation drafts of practice guidelines for large internet platforms and two national standards

On 23 December 2023, China's National Information Security Standardisation Technical Committee (TC260) released the consultation draft of the Cybersecurity Practice Guidelines – Cybersecurity Assessment Guidelines for Large Internet Platforms for public comment. The Draft Guidelines stipulate that large internet platforms must establish an assessment working group before organising and conducting security assessments. They also state that the assessment working group shall be responsible for rectifying problems discovered during the security assessment and for identifying cybersecurity issues that may arise from the cross-border provision of services, cross-border data transfers, and the differing policies, legal systems and protection levels in different jurisdictions.

On 4 January 2024, the TC260 released consultation drafts of the Information Security Technology - Information Security Risk Management Guidance and the Information Security Technology - Information Security Management System Requirements for public comment.

EU

EU Data Act enters into force

On 11 January 2024, the European Data Act entered into force. Most of its obligations will start to apply in the EU in 20 months, on 12 September 2025, with the exception of the obligation to make product data and related service data accessible to users, which will apply one year later, from 12 September 2026. The Data Act is the European regulation that aims to govern how data is accessed, shared, and ported among different players: business to consumer, business to business, and business to government. It contains obligations regarding the design of connected products, data sharing, and provisions that will facilitate interoperability and switching between data processing services.

Draft text of EU Artificial Intelligence Act leaked

On 21 January 2024, the draft text of the EU's AI Act was leaked and widely circulated, ahead of a meeting of the Working Party on Telecommunications and Information Society on 24 January 2024, and of a vote by the Committee of Permanent Representatives reportedly scheduled for 2 February 2024. In December 2023, the EU institutions reached a provisional political agreement on the AI Act. The text will need to be finalised and approved by the EU Member States as well as the European Parliament.

The legislation introduces a comprehensive framework for General Purpose AI (GPAI) models, which are AI models capable of performing a broad spectrum of tasks. Providers of GPAIs must adhere to specific rules, including maintaining technical documentation and respecting EU copyright laws. Additional stringent requirements are set for GPAIs identified as posing systemic risk, such as mandatory notification to the European Commission and enhanced obligations for risk management and cybersecurity.

The AI Act incorporates ethical principles to encourage the development of trustworthy AI and requires all AI system providers to ensure AI literacy among their staff. A "filter system" is introduced to determine the high-risk status of AI systems, with certain systems being exempt from stringent scrutiny. High-risk AI systems must undergo a Fundamental Rights Impact Assessment, and measures must be taken to mitigate data biases. The Act also specifies transparency requirements for certain AI applications.

Implementation of the AI Act will be phased: for example, the prohibited practices and the general provisions of the AI Act come into effect six months after entry into force, and the obligations for providers of GPAI models apply 12 months after entry into force.

European Commission designates a second set of Very Large Online Platforms under the Digital Services Act

On 20 December 2023, the European Commission designated a second set of three very large online platforms (VLOPs) under the Digital Services Act: Pornhub, Stripchat, and XVideos, after finding that these platforms exceeded the threshold of 45 million average monthly users in the European Union. Once the Commission has designated a platform or a search engine, the service has four months to take measures to comply with the DSA obligations.

This designation follows the first set of designations on 25 April 2023, when the European Commission designated 19 Very Large Online Platforms and Search Engines. Recently, the European Commission sent seventeen of them requests for information on the measures they have taken to comply with the obligation to give eligible researchers access, without undue delay, to the data that is publicly accessible on their online interfaces.

All online platforms will have to comply with the DSA requirements by 17 February 2024.

European Supervisory Authorities publish the first set of rules under DORA for Information and Communication Technology and third-party risk management and incident reporting frameworks

On 17 January 2024, the European Supervisory Authorities (EBA, EIOPA and ESMA) published the first set of final draft technical standards under the Digital Operational Resilience Act (DORA). The final draft follows the public consultation that took place between 19 June and 11 September 2023. These standards are designed to boost the digital operational resilience of the EU's financial industry, focusing on fortifying the management of Information and Communication Technology (ICT) risks and the frameworks for reporting incidents involving third parties.

The technical standards encompass Regulatory Technical Standards (RTS) that detail the framework for managing ICT risks and set out the criteria for classifying ICT-related incidents. They also include Implementing Technical Standards (ITS) that define the templates for the register of information. The final draft technical standards have been submitted to the European Commission for review, with the aim that they be adopted in the coming months.

UK

Information Commissioner's Office launches new consultation series on generative AI

On 15 January 2024, the UK's Information Commissioner’s Office (ICO) initiated a series of consultations on generative AI, exploring how data protection law should be applied to the development and use of this technology. The consultations aim to assist the ICO in providing the industry with clarity regarding its obligations and in protecting people’s information rights and freedoms.

The ICO is inviting a range of stakeholders to share their views, including developers and users of generative AI, legal advisors and consultants in this field, civil society groups, and other interested public bodies. The first consultation will investigate the legality of training generative AI models on personal data obtained from the internet and will remain open until 1 March 2024. Future consultations, to be launched throughout the first half of 2024, will explore issues such as the accuracy of generative AI outputs.

UK government publishes draft Code of Practice on cybersecurity governance

On 23 January 2024, the UK government published a draft Code of Practice on cybersecurity governance, aimed at helping company directors and senior leaders strengthen their organisations' cyber defences. Developed in collaboration with industry directors, cybersecurity and governance experts, and the UK National Cyber Security Centre (NCSC), the Code provides a framework for organisations to prepare for and recover from cyber incidents. Recognising that cybersecurity governance requires a tailored approach, the Code outlines five key principles: risk management, cyber strategy, people, incident planning and response, and assurance and oversight, each accompanied by suggested actions. The government is actively seeking feedback on the draft Code from various sectors and has invited responses until 19 March 2024.

UK National Cyber Security Centre warns of AI-driven increase in ransomware threats

On 24 January 2024, the UK National Cyber Security Centre (NCSC) published a report predicting that AI will significantly increase the frequency and severity of cyberattacks in the next two years. The report indicates that AI is already being utilised by various cyber threat actors, both state-sponsored and independent, across different levels of expertise. It highlights that AI enhances abilities in reconnaissance and social engineering, making these tactics more effective and difficult to detect. Additionally, the report suggests that AI lowers the entry threshold for inexperienced cyber criminals, hackers-for-hire, and hacktivists. This is expected to escalate the global ransomware threat, as AI enables these actors to conduct more sophisticated attacks with less effort.

Americas

Utah Consumer Privacy Act comes into effect

On 31 December 2023, the Utah Consumer Privacy Act came into effect. Utah's law is the fifth comprehensive state consumer privacy law to take effect, after those of California, Virginia, Colorado and Connecticut. While many of the recently enacted state data privacy laws became effective (i.e., operative and enforceable) in 2023, a number of new state data privacy laws will come into effect in 2024 and in the coming years, and additional data privacy legislation is also likely on the horizon. In the absence of a comprehensive federal privacy law, it is imperative for companies to keep up with the evolving state legislation.

Read our individual overviews of currently enacted comprehensive state data privacy laws.

Spot Bitcoin exchange-traded products approved by SEC

On 10 January 2024, the U.S. Securities and Exchange Commission (SEC) approved the listing and trading of ten spot bitcoin exchange-traded product (ETP) shares. Since 2018, the SEC has denied over 20 bitcoin ETP filings. These disapprovals led to lawsuits such as Grayscale Investments, LLC v. SEC in 2023, in which the U.S. Court of Appeals for the District of Columbia Circuit held that the SEC had failed to sufficiently explain its denial of the Grayscale ETP.

SEC Chair Gary Gensler stated that "While we approved the listing and trading of certain spot bitcoin ETP shares today, we did not approve or endorse bitcoin. Investors should remain cautious about the myriad risks associated with bitcoin and products whose value is tied to crypto."

Clifford Chance acted for VanEck on the registration and launch of their bitcoin ETF.

Middle East

Israeli ministries publish policy on AI regulations and ethics

On 17 December 2023, the Israeli Ministry of Innovation, Science, and Technology, in collaboration with the Office of Legal Counsel and Legislative Affairs at the Ministry of Justice, published the 2023 Policy on Artificial Intelligence Regulations and Ethics. The policy outlines strategic guidelines for the regulation of AI within the private sector and delineates sector-specific instructions, distinct from those being developed for public sector AI applications. It identifies key challenges such as discrimination, human oversight, and privacy, advocating for "responsible innovation" that ensures ethical AI development across its lifecycle, particularly in deployment.

Recommendations include: establishing a regulatory framework supporting sectoral regulators and international collaboration, adopting OECD-based ethical principles, and forming an inter-agency AI policy coordination centre to guide and harmonise regulatory efforts. Additionally, it encourages stakeholder engagement and active participation in international regulatory development.

Israel Privacy Protection Authority invites public comments on policy document on collection and use of biometric information in the workplace

On 17 January 2024, Israel's Privacy Protection Authority (PPA) published a policy document on the collection and use of biometric information for workplace attendance control. The document updates the PPA's position set out in its 2012 guidance, "Use of Biometric Attendance Control in Workplace". It is primarily intended for private and public organisations that use biometric identification technologies for employees. Employers are permitted to collect biometric data within the bounds of reasonableness and proportionality, and with explicit informed consent from employees. They must also adhere to statutory information security requirements and the purpose limitation principle, ensuring biometric data is retained only as long as necessary and is appropriately reduced or deleted thereafter.

Saudi Arabia Data & Artificial Intelligence Authority publishes guide on generative AI

On 20 December 2023, the Saudi Data & Artificial Intelligence Authority (SDAIA) announced the publication of a guide on generative AI. The guide includes a summary of generative AI components, an overview of the steps to develop such models, the basic pillars for adoption, and an analysis of its impact. The SDAIA emphasised the benefits of generative AI, which include enhancing productivity by improving employees' ability to complete tasks, offering a personalised user experience, increasing efficiency by automating repetitive tasks and operations in various fields, and enhancing innovation by generating new ideas for products or services.

Africa

Kenya Office of the Data Protection Commissioner publishes sector-specific guidance notes on data protection

On 8 January 2024, the Office of the Data Protection Commissioner in Kenya (ODPC) announced that it had published sector-specific guidance notes for the education and communication sectors, digital credit providers, and the processing of health data. Additionally, the ODPC published guidance notes on Data Protection Impact Assessment (DPIA), consent, the registration of data controllers and data processors, and electoral purposes.

The guidance note on the registration of data controllers and data processors was developed to assist entities in determining whether they are data controllers or data processors and in understanding their obligations with respect to mandatory registration. The guidance note takes into account the Data Protection Act, 2019, the Data Protection (Registration of Data Controllers and Data Processors) Regulations, 2021, as well as international best practices. It also provides a checklist for organisations to determine whether they are controllers or processors and guidance on how to register.

The guidance note on DPIA aims to assist data controllers and data processors in understanding the risk of processing activities undertaken and to identify when a DPIA must be carried out and submitted to the ODPC. The guidance note provides a template for a DPIA and lists criteria to consider when determining which processing activities would require a DPIA.

Additional Information

This publication does not necessarily deal with every important topic nor cover every aspect of the topics with which it deals. It is not designed to provide legal or other advice. Clifford Chance is not responsible for third party content. Please note that English language translations may not be available for some content.

The content above relating to the PRC is based on our experience as international counsel representing clients in business activities in the PRC and should not be construed as constituting a legal opinion on the application of PRC law. As is the case for all international law firms with offices in the PRC, whilst we are authorised to provide information concerning the effect of the Chinese legal environment, we are not permitted to engage in Chinese legal affairs. Our employees who have PRC legal professional qualification certificates are currently not PRC practising lawyers.