Clifford Chance

Talking Tech

Tech Policy Unit Horizon Scanner

March 2026

Artificial Intelligence | Data Privacy | Cyber Security · 17 April 2026

March was notable because agentic AI stopped being treated as a future governance problem and started being regulated as a present operational risk. Across multiple jurisdictions, regulators moved from abstract discussions of autonomy and innovation to concrete warnings, draft standards and deployment guidance aimed at systems that can act, integrate and escalate without human instruction.

Hong Kong’s privacy regulator publicly flagged the risks of agentic tools with elevated system access; Chinese standard‑setters opened consultation on security guidelines for OpenClaw‑type agents; Japan’s financial regulator and Türkiye’s data protection authority both addressed agentic AI explicitly in sector‑specific guidance. Taken together, these interventions suggest growing concern not about AI decision‑making in isolation, but about what happens when systems are given authority to act across networks, data sets and services.

Elsewhere, March brought a similar move from principle to mechanics. Regulators focussed on labelling and transparency obligations for AI‑generated content in the EU, expectations on age assurance and automated recruitment in the UK, and practical certification, licensing and authorisation regimes for digital infrastructure and services across Asia, Africa and the Middle East. Enforcement activity, particularly in data protection and online safety, showed that these expectations are no longer just aspirational.

Regulators are homing in on how technologies are actually deployed, governed and supervised in practice, with emphasis on access controls, lifecycle governance, contractual allocation of responsibility and human oversight.

THE REGIONS IN DETAIL

APAC (Excluding China)

Hong Kong

Hong Kong Regulator Warns of Privacy Risks from Agentic AI Tools like OpenClaw

On 16 March 2026, the Office of the Privacy Commissioner for Personal Data (PCPD) issued a warning regarding the heightened privacy and security risks associated with "agentic AI" tools, such as OpenClaw, which possess greater autonomy and higher system access than standard chatbots. The PCPD highlights that because these agents can autonomously read local files, manage emails, and execute payments without real-time user involvement, they pose significant threats including unauthorized data access, malicious system takeovers via unverified plugins, and accidental data deletion. To mitigate these risks, the PCPD recommends that organizations and individuals grant only the minimum necessary access rights, utilise official and updated software versions, isolate AI runtime environments, and maintain a "human-in-the-loop" approach for high-risk operations to ensure final control remains with the user.

Singapore

Singapore Updates AI in Healthcare Guidelines to Address Emerging Risks

On 10 March 2026, Singapore’s Ministry of Health (MOH) and Health Sciences Authority (HSA) published Version 2.0 of the AI in Healthcare Guidelines (AIHGle 2.0). The update covers responsible AI lifecycle management, data governance, transparency, and fairness. It also addresses risks from generative and continuously learning AI, providing detailed recommendations for all stakeholders.

Singapore MAS and Industry Develop Toolkit for AI Risk Management in Finance

On 20 March 2026, the Monetary Authority of Singapore (MAS) announced the publication of an Artificial Intelligence (AI) Risk Management Toolkit for the financial services sector. Developed in collaboration with a consortium of 24 industry partners, the toolkit includes an "AI Risk Management Operationalisation Handbook" and a supplement of case studies to provide practical guidance on managing risks across traditional, generative, and emerging agentic AI technologies. This initiative reinforces MAS’s position that boards and senior executives are accountable for AI governance as the technology becomes more embedded in financial services.

Singapore and Japan Agree on Mutual Recognition of IoT Cybersecurity Labels

On 18 March 2026, Singapore and Japan signed a Memorandum of Cooperation to mutually recognize IoT cybersecurity certifications under the JC-STAR and CLS schemes. This agreement streamlines certification processes for manufacturers, strengthens IoT device cybersecurity standards, and expands market access in both countries. The MoC becomes effective 1 June 2026.

Japan

METI Issues Guidelines on the Roles of Cyber Infrastructure Providers

On 31 March 2026, Japan’s Ministry of Economy, Trade and Industry (METI) published the Guidelines on the Roles Expected of Cyber Infrastructure Providers. The Guidelines aim to strengthen cybersecurity by clarifying the respective responsibilities of cyber infrastructure providers and their customers in the development, provision, and operation of certain software and IT services. Under the Guidelines, cyber‑infrastructure providers are assigned defined roles across the development, supply, and operational phases. These include obligations to ensure secure design and development, manage cybersecurity risks within the supply chain, respond appropriately to vulnerabilities and incidents, and engage in timely information‑sharing. Customers are expected to undertake appropriate risk management, clearly specify security requirements, and ensure proper procurement and operation of systems and services. The Guidelines emphasise that roles, responsibilities, and cooperation frameworks between providers and customers should be clearly allocated and reflected in contractual arrangements.

Japan FSA Launches Consultation on Updated AI Discussion Paper for Finance

On 3 March 2026, the Financial Services Agency (FSA) of Japan published an updated AI Discussion Paper focused on the financial sector, highlighting risk mitigation strategies, governance, and emerging AI trends like agentic AI. The paper addresses challenges such as data readiness, explainability, fairness, and personal data protection. The FSA welcomes comments on the paper.

Vietnam

Vietnam Enacts Comprehensive Law on Artificial Intelligence

Vietnam’s new AI Law, effective 1 March 2026, establishes a national governance framework with a three-tier risk classification for AI systems. High-risk systems require human oversight, and the law mandates transparency and infrastructure development. An implementation plan covers training, legal audits, and a dedicated AI development fund.

Australia

Australia OAIC Provides Privacy Guidance for AML/CTF Reporting Entities

On 27 February 2026, the Office of the Australian Information Commissioner (OAIC) released guidance reminding entities regulated under the Anti-Money Laundering and Counter-Terrorism Financing Act 2006 of their Privacy Act 1988 obligations. Key points include limiting data collection, maintaining transparency, securing data, having a breach response plan, and timely destruction or de-identification of personal data when no longer needed.

China

Chinese authorities issue Opinions on accelerating high-quality development of science and technology insurance

On 2 March 2026, the Ministry of Science and Technology and other agencies jointly issued the Opinions on Accelerating the High-Quality Development of Science and Technology Insurance. The document sets out measures to strengthen insurance support for major science and technology projects and SMEs, expand coverage in key regions, and promote innovative insurance products for emerging fields such as AI and quantum technology. It also encourages increased insurance investment in technology innovation and improved risk management.

TC260 seeks public comment on draft security guidelines for OpenClaw-type intelligent agents

On 31 March 2026, the National Technical Committee 260 on Cybersecurity of the Standardization Administration of China (TC260) published a draft Cybersecurity Standard Practice Guide - Security Guidelines for Deployment and Use of OpenClaw-type Intelligent Agents for public comment. The draft aims to help organisations manage security risks in deploying OpenClaw-type intelligent agents. Comments are requested by 15 April 2026.

Africa

Mozambique

Mozambique announces draft Personal Data Protection Law

On 5 March 2026, Mozambique’s National Institute of Information and Communication Technologies (INTIC) announced that the Council of Ministers approved and submitted to Parliament a draft Personal Data Protection Law. The draft establishes rules for the processing of personal data in physical and electronic formats and applies to public and private entities operating in Mozambique or subject to its jurisdiction. INTIC stated that the law would set out core data protection principles, including the general requirement of a data subject’s free and informed consent, and grant rights such as access, rectification and erasure. The framework would also require appropriate security measures, mandate breach notifications where applicable, and restrict international data transfers to situations where adequate protection is ensured.

Kenya

Kenya limits cybercrime law impact on online speech

On 6 March 2026, the Kenyan Court of Appeal partially allowed an appeal by ICJ-Kenya, BAKE, and Article19 against the Computer Misuse and Cybercrimes Act (2018), striking down Sections 22 and 23 which criminalized "false publications" and "fake news." The court ruled these sections were unconstitutionally broad "unguided missiles" that created a chilling effect on free speech, noting that existing civil defamation and national cohesion laws provide sufficient alternatives for redress. However, the court upheld the remainder of the Act, including investigative powers for search, seizure, and real-time surveillance—provided they are authorized by a judicial warrant—as well as provisions targeting child pornography, cybersquatting, and offenses requiring a "guilty mind" (mens rea).

Democratic Republic of the Congo

DRC introduces licensing regime for selected digital services

On 11 March 2026, two key ministerial orders were signed by the Minister of Digital Economy under the Digital Code requiring digital operators to obtain prior authorisation or submit declarations based on the risk level of their activities. The orders mandate five-year renewable authorisations for high-risk activities involving systemic risk or sensitive data—such as data centres, electronic payment platforms, and trust services—while lower-risk activities like online content services and local marketplaces are subject to a simpler digital declaration process. To ensure compliance, a transitional period is in effect until 30 June 2026, with full enforcement and sanctions for non-compliance beginning 1 July 2026.

Zimbabwe

Zimbabwe considers Social Media Ban for children under 18

On 8 March 2026, Information and Communication Technology Minister Tatenda Mavetera announced that the government is preparing measures to restrict minors’ access to social media platforms as part of a draft Child Online Protection Policy aimed at protecting children from online harm. While the policy is still under development and implementation details have not yet been released, officials have indicated that stricter age‑verification requirements for social media companies may form part of the approach.

Europe

European Union

European Commission publishes second draft Code of Practice on marking and labelling AI‑generated content

On 5 March 2026, the European Commission published the second draft of the voluntary Code of Practice on the marking and labelling of AI‑generated content, intended to support compliance with the transparency obligations set out in Article 50 of the AI Act. The draft is structured into two sections, addressing respectively the obligations of providers of generative AI systems and deployers responsible for labelling certain AI‑generated content, such as deepfakes and text on matters of public interest.

EU launches a new initiative to build a sovereign federated telco‑cloud infrastructure

On 3 March 2026, the European Commission and a consortium led by European telecom and cloud players announced EURO‑3C, a large‑scale project aimed at building a federated European telco‑edge‑cloud infrastructure to strengthen digital sovereignty. Supported by Horizon Europe funding, the initiative seeks to interconnect existing telecom, edge computing and cloud infrastructures across Member States rather than creating a single EU hyperscaler. EURO‑3C is presented as a response to Europe’s continued dependence on non‑European cloud providers and past shortcomings of initiatives such as Gaia‑X. By pooling national capabilities and promoting interoperability, the project aims to offer secure, high‑performance digital services for strategic sectors while reinforcing EU autonomy in cloud and data infrastructure.

GDPR: CJEU rules that access requests made solely to trigger compensation claims may be refused as abusive

On 19 March 2026, the Court of Justice of the European Union held that a first request for access to personal data under Article 15 GDPR may, in certain circumstances, be regarded as “excessive” and refused if it is made solely for the purpose of artificially creating the conditions to claim compensation. The Court ruled that a controller may demonstrate abuse of rights where the request is not intended to verify the lawfulness of data processing, but rather to provoke an infringement and subsequent damages claim under Article 82 GDPR. The Court further clarified that a GDPR infringement does not automatically give rise to a right to compensation, which requires proof of actual material or non‑material damage.

EDPB and EDPS issue Joint Opinion on the proposed European Biotech Act

On 10 March 2026, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) adopted Joint Opinion 3/2026 on the European Commission’s proposal for a European Biotech Act. While supporting the objective of strengthening the EU’s biotechnology and biomanufacturing sectors and harmonising the regulatory framework for clinical trials, the EDPB and EDPS stress that the processing of health and genetic data requires a high level of protection. The Joint Opinion welcomes the introduction of a single EU‑level legal basis for the processing of personal data in clinical trials, but calls for greater clarity on controller roles, data retention periods, and safeguards for further processing, including for scientific research and AI‑enabled biotechnology. The EDPB and EDPS also emphasise the need to ensure coherence with the GDPR and the AI Act, and to avoid any lowering of data protection standards.

European Parliament adopts position on Digital Omnibus on AI Act

On 18 March 2026, the Committees on the Internal Market and Consumer Protection (IMCO) and on Civil Liberties, Justice and Home Affairs (LIBE) adopted a joint position on a proposal amending the Artificial Intelligence Act (Digital Omnibus on AI) as part of a simplification initiative.

On 26 March 2026, the European Parliament adopted its position on the proposal in plenary.

The adopted amendments include:

  • Obligations applicable to high‑risk AI systems listed in the Regulation would apply from 2 December 2027.
  • For AI systems subject to EU sectoral legislation on product safety and market surveillance, the application date would be 2 August 2028.
  • In addition, the Parliament supported a transitional period until 2 November 2026 for compliance with requirements relating to the labelling of AI‑generated audio, image, video and text content.

Following the adoption of the Parliament’s position, negotiations with the Council on the final text of the Digital Omnibus on AI are expected to begin.

United Kingdom

ICO Calls on Employers to Review Automated Decision‑Making in Recruitment

On 31 March 2026, the Information Commissioner’s Office (ICO) published a statement and report calling on organisations to review their use of automated decision‑making (ADM) in recruitment. The regulator identified recurring risks around transparency, discrimination, and over-reliance on automated tools under the UK GDPR. The report sets out clear regulatory expectations, emphasising genuine human oversight, improved transparency for candidates, active monitoring for bias and fairness, and the use of robust Data Protection Impact Assessments (DPIAs) where automated recruitment processes pose high risks to individuals’ rights.

ICO Fines Police Scotland for Serious Data Protection Failures

On 11 March 2026, the Information Commissioner’s Office (ICO) announced that it had issued a £66,000 fine and reprimand to Police Scotland for serious failures in the handling of sensitive personal data. The ICO found that Police Scotland had extracted the entire contents of a complainant’s mobile phone in a manner it considered excessive and unfair and had subsequently disclosed unredacted information to a third party without adequate safeguards. The regulator also concluded that Police Scotland had failed to implement appropriate organisational and technical measures, limit data sharing to what was necessary, and notify the ICO of the breach within the required 72‑hour timeframe.

Ofcom and ICO Issue Joint Statement on Age Assurance under Online Safety Act and UK Data Protection Legislation

On 25 March 2026, Ofcom and the Information Commissioner’s Office (ICO) published a joint statement clarifying regulatory expectations for online services implementing age assurance under the Online Safety Act and UK data protection law. The statement sets out a shared, risk‑based and tech‑neutral approach, emphasising that age assurance methods must be effective, proportionate and compliant with UK GDPR, and that self‑declaration alone will not be sufficient where services are required to protect children from harm. It also reflects updated data protection requirements following the Data (Use and Access) Act 2025 and aims to promote consistent compliance across online safety and data protection regimes.

Ofcom Fines 4chan for Failing to Protect Children from Online Pornography

On 19 March 2026, Ofcom announced that it had imposed a £450,000 fine on 4chan for failing to put in place effective measures to prevent children from accessing pornographic content on its service, in breach of its duties under the Online Safety Act. Ofcom found that the platform had not implemented appropriate age assurance or other safeguards to protect children from harmful content. The enforcement action underscores Ofcom’s expectation that platforms hosting pornographic content must take proactive and robust steps to comply with child‑safety obligations under the new online safety regime.

MHRA Announces International Regulatory Innovation Initiatives for AI and Advanced Therapies

On 25 March 2026, the Medicines and Healthcare products Regulatory Agency (MHRA) – the UK regulator for medicines and healthcare products – published a statement outlining its approach to supporting emerging healthcare technologies through enhanced international regulatory collaboration. The announcement highlights cooperation with Singapore’s Health Sciences Authority (HSA), including the signing of a refreshed Memorandum of Understanding and the launch of the UK-Singapore joint Regulatory Innovation Corridor pilot. The initiative is intended to support work‑sharing, early joint regulatory advice and regulatory alignment for novel technologies, including AI‑enabled medical products and advanced cell and gene therapies. The MHRA emphasised that the programme aims to accelerate patient access to innovative treatments while maintaining safety and trust.

House of Lords Library Publishes Briefing on Cyber Security and Government Resilience

On 19 March 2026, the House of Lords Library published a briefing examining cyber security challenges facing the UK Government and recent efforts to strengthen resilience across public services. The briefing outlines the evolving cyber threat landscape, highlighting the role of cybercriminals and state‑linked actors in targeting government systems, and summarises current government initiatives to improve preparedness and response. It situates cyber security as a key component of national security as reliance on digital services continues to grow.

Government Moves Away from Opt‑Out Model Following Parliamentary Scrutiny of AI and Copyright

On 18 March 2026, the Secretary of State for Science, Innovation and Technology released a statement confirming the Government’s intention to protect creators’ control over and remuneration for their works while supporting continued AI development. The statement follows concerns about unlicensed AI training and transparency raised in a 6 March 2026 report by the House of Lords Communications and Digital Committee. In response, the Government will continue to consider evidence and policy options on digital replicas, labelling of AI‑generated content, strengthening creator control and input transparency, and supporting independent creatives in licensing their content.

Americas

The United States of America

White House AI Policy Framework Urges National Standard, Limits New Agency

On 20 March 2026, the White House released its National Policy Framework for Artificial Intelligence, a set of legislative recommendations to guide future federal AI policy. The Framework is not binding, but it outlines a push for a single national approach to AI governance. It emphasizes protections for children, support for innovation and workforce development, infrastructure and energy considerations related to AI deployment, and respect for intellectual property and free expression. The Framework recommends relying on existing regulatory authorities rather than creating a new federal AI agency. It also urges Congress to consider preempting certain state AI laws that impose undue burdens, while preserving state authority over generally applicable laws, zoning, and state government uses of AI.

Governor Newsom Executive Order Strengthens AI Vendor Rules and Responsible Use

On 30 March 2026, California Governor Gavin Newsom issued an executive order to tighten California’s AI procurement processes. Companies seeking state contracts must meet rigorous standards and demonstrate responsible policies to prevent misuse of their technology, while safeguarding user safety and privacy. The order directs the development of new contracting practices that vet companies’ safeguards against exploitation, bias, and civil rights violations, and allows California to separate its procurement process from federal standards if necessary. It also expands the use of generative AI to improve state services and mandates the creation of best practices for watermarking AI-generated content. Additionally, California is launching "Engaged California," a statewide initiative to gather public input on AI’s impact on the workforce and future policy.

Middle East

UAE

MoHRE Deploys AI‑Driven Governance Across the Labour Market

On 17 March 2026, the Ministry of Human Resources and Emiratisation announced the deployment of AI‑driven governance systems across the UAE labour market. Artificial intelligence is embedded in work‑permit processing, inspections and compliance monitoring, enabling more precise, data‑driven oversight of employers and workplaces. AI tools are used to verify identity documents, employment contracts and records, reduce fraud risks, and enhance occupational health and safety enforcement.

Turkey

KVKK issues guidance on the use of generative AI tools in the workplace

On 5 March 2026, Türkiye’s Personal Data Protection Authority (KVKK) published guidance on the use of generative AI tools in the workplace, focusing on risks arising from employees’ use of publicly accessible third‑party AI systems. The guidance highlights the phenomenon of “shadow AI”, where employees deploy generative AI without organisational approval or oversight, creating heightened personal data protection, cybersecurity, intellectual property and accountability risks. KVKK stresses that Law No. 6698 applies regardless of the technology used and that personal data processed through generative AI tools must comply with core data protection principles. Although non‑binding, the guidance sets out the authority’s expectations for governance, risk management and awareness in workplace AI use.

KVKK publishes guidance on agentic AI

On 12 March 2026, Türkiye’s Personal Data Protection Authority (KVKK) published guidance on agentic artificial intelligence, addressing the data protection implications of AI systems capable of autonomous decision‑making and action. The guidance explains the concept of agentic AI, the role of AI agents, and common use cases, while highlighting heightened risks linked to purpose limitation, data minimisation, transparency, accountability and security. KVKK stresses that personal data protection obligations apply throughout the entire lifecycle of agentic AI systems and calls on organisations deploying such technologies to review and strengthen their compliance, governance and human‑oversight arrangements.

Saudi Arabia

SAMA Commences Licensing of Fintech Companies to Provide Open Banking Services

On 26 March 2026, the Saudi Central Bank (SAMA) announced the commencement of licensing for fintech companies seeking to provide open banking services, following their successful participation in SAMA’s regulatory sandbox. The initiative forms part of SAMA’s efforts to enhance the efficiency, flexibility and innovation of the financial sector while ensuring secure and reliable services. Open banking enables customers to share their financial data, with consent, with licensed and supervised entities to access new financial products and services. SAMA emphasised that the licensing framework is designed to protect data privacy, strengthen trust and promote collaboration between banks and fintech firms, in line with the National Fintech Strategy under Saudi Vision 2030.

Key Dates on the Horizon

May 2026

  • Mid‑May 2026: WhatsApp must comply with DSA VLOP obligations following its designation.

June 2026

  • 1 June 2026: Singapore–Japan Mutual Recognition of IoT Cybersecurity Labels enters into force.

July 2026

  • 1 July 2026: DRC new authorisation rules for key digital services take effect, with full enforcement and sanctions for non-compliance.

August 2026

  • 2 August 2026:
    • Providers of GPAI models placed on the market or put into service before this date must be compliant with the EU AI Act by this date.*
    • Transparency requirements for certain AI systems under Article 50 of the EU AI Act are expected to enter into force, following the development of guidelines and a voluntary code of practice.*

January 2027

  • 1 January 2027: China’s mandatory standard on information erasure in electronic products takes effect.

* Both of these dates would be delayed or changed if the proposals in the EU Digital Omnibus on AI package are adopted.

Additional information

This publication does not necessarily deal with every important topic nor cover every aspect of the topics with which it deals. It is not designed to provide legal or other advice. Clifford Chance is not responsible for third party content. Please note that English language translations may not be available for some content.

The content above relating to the PRC is based on our experience as international counsel representing clients in business activities in the PRC and should not be construed as constituting a legal opinion on the application of PRC law. As is the case for all international law firms with offices in the PRC, whilst we are authorised to provide information concerning the effect of the Chinese legal environment, we are not permitted to engage in Chinese legal affairs. Our employees who have PRC legal professional qualification certificates are currently not PRC practising lawyers.