Clifford Chance

Global IP Updates

IP topics from around the globe

Deepfake: the fine line between fiction and reality

The term ‘deepfake’ describes a swapping technique whereby original multimedia content (images, videos, audio files) is superimposed onto existing content. Deepfakes allow the characteristics and movements of a face and/or body to be recreated in an incredibly realistic way, as well as the faithful duplication of a person's voice. The most common deepfakes generate digital doppelgängers, look-alikes so accurate that it is often difficult to distinguish the real image from the fake one. The result is a video in which the person on camera can be made to say anything the content creator wishes.

In particular, deepfake technology is named after a form of deep learning, a type of machine learning that applies neural networks to large datasets and in which the AI "learns" to perform the task assigned to it. In practical terms, creating a deepfake involves selecting:

(i)    a "target video" to serve as the deepfake's basis; and

(ii)   two datasets, consisting of hundreds, if not thousands, of frames of the subjects involved: the first relates to the person to be replaced in the video, while the second relates to the person to be superimposed onto the original video.

The larger the datasets, the more accurate the final result: the encoder-decoder models at the heart of the algorithm learn what a subject looks like from a variety of angles and in various environmental conditions, map that subject, and eventually transpose it onto the target video.
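By way of illustration only, the sketch below (in Python, using the PyTorch library) shows the shared-encoder / two-decoder autoencoder idea described above in its simplest form: faces of both people are compressed by the same encoder, each identity has its own decoder, and a swap is produced by decoding person A's frame with person B's decoder. All layer sizes, names and the 64x64 crop size are illustrative assumptions, not a description of any particular deepfake tool.

```python
# Minimal, illustrative sketch of the shared-encoder / two-decoder autoencoder
# architecture commonly used for face swapping. Assumed sizes and names are
# for illustration only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent representation."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 64x64 RGB face from the shared latent code (one decoder per identity)."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 8, 8)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # trained to reconstruct faces of person A
decoder_b = Decoder()  # trained to reconstruct faces of person B

# Training (sketch): each decoder learns to reconstruct its own identity from the
# shared latent space, e.g. loss = MSE(decoder_a(encoder(face_a)), face_a).
reconstruction_loss = nn.MSELoss()

# Swapping (sketch): encode a frame of person A, decode it with person B's decoder,
# so the output keeps A's pose and expression but shows B's appearance.
frame_of_a = torch.rand(1, 3, 64, 64)     # placeholder for a real aligned face crop
swapped = decoder_b(encoder(frame_of_a))  # shape: (1, 3, 64, 64)
print(swapped.shape)
```

The accuracy described above follows from this training setup: the more frames of each subject the datasets contain, the better each decoder learns that subject's appearance across angles and lighting conditions.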

Deepfake technology can be put to several legitimate uses, e.g. in the entertainment industry or in the health sector (it has been used, for instance, to detect tumours), but it can also lend itself to political interference or hateful uses such as revenge porn. There is currently no legislation specifically regulating deepfakes. However, the Italian Data Protection Authority (Garante per la protezione dei dati personali) has addressed the issue, warning about the malicious use of deepfakes, in particular in cases of cyberbullying and so-called "deepnude" content, and suggesting best practices for users to defend themselves.

In December 2019 the World Intellectual Property Organization (“WIPO”) published the “Draft Issues Paper On Intellectual Property Policy And Artificial Intelligence” which, inter alia, addresses deepfake content in terms of intellectual property rights. WIPO suggested that copyright in deepfakes should belong to their inventor/content creator, while at the same time questioning whether copyright should be accorded to deepfake imagery at all. Deepfakes can certainly infringe copyright, but they can also cause even more serious harm, such as violations of privacy, personal data protection and human rights, because in most cases the source person whose image and/or voice is used has not given their consent and, in any event, has no copyright interest in their own image.

In accordance with Article 5(1) of the EU General Data Protection Regulation (“GDPR”), inaccurate and/or false deepfake content must be erased or rectified without delay. Even where the deepfake is true or accurate, data subjects may exercise the right to be forgotten under Article 17 GDPR, i.e. the right to obtain from the controller the erasure of their personal data without undue delay, with the controller being obliged to erase the personal data in question.

Several social media companies, including Facebook, Instagram, Twitter and Reddit, have officially banned or planned to ban deepfakes from their platforms. However, some of these bans have extensive loopholes as deepfakes have become harder to detect and are now extremely easy to create. Moreover, the extent to which a deepfake is ‘harmful’ largely depends on a subjective evaluation of the content and on the opinion of the person(s) depicted.

In light of the above, specific regulation of the phenomenon may be necessary in order to prevent misuse of the technology, which, beyond the violation of data protection rules, can have (and has indeed already had) personal and professional consequences for the individuals affected by it.

Key issues

  • Deepfakes generate digital look-alikes so accurately that it is difficult to distinguish the real image from the fake one.
  • Current regulation of the phenomenon mostly relates to data protection and a dedicated source of law may be needed as deepfakes have serious implications for society and individuals.