The emergence of a new generation of digital manipulation techniques capable of producing highly realistic fake videos – known as deepfakes – has raised substantial concerns about possible misuse. In response to these concerns, this report assesses the technical, societal and regulatory aspects of deepfakes. The assessment of the underlying technologies for deepfake videos, audio and text synthesis shows that they are developing rapidly, and are becoming cheaper and more accessible by the day. The rapid development and spread of deepfakes is taking place within the wider context of a changing media system. An assessment of the risks associated with deepfakes shows that they can be psychological, financial and societal in nature, and their impacts can range from the individual to the societal level. The report identifies five dimensions of the deepfake lifecycle that policy-makers could take into account to prevent and address the adverse impacts of deepfakes. The legislative framework on artificial intelligence (AI) proposed by the European Commission presents an opportunity to mitigate some of these risks, although regulation should not focus on the technological dimension of deepfakes alone. The report includes policy options under each of the five dimensions, which could be incorporated into the AI legislative framework, the proposed European Union digital services act package and beyond. A combination of measures will likely be necessary to limit the risks of deepfakes, while harnessing their potential.

 

 

1. Introduction

The emergence of a new generation of digitally manipulated media has given rise to considerable worries about possible misuse. Advancements in artificial intelligence (AI) have enabled the production of highly realistic fake videos that depict a person saying or doing something they have never said or done. The popular and catch-all term often used for these fabrications is 'deepfake', a blend of the words 'deep learning' and 'fake'. The underlying technology is also used to forge audio, images and texts, raising similar concerns.

Recognising the technological and societal context in which deepfakes develop, and responding to the opportunity provided by the regulatory framework around AI proposed by the European Commission, this report aims to inform the upcoming policy debate. The following research questions are addressed:

1 What is the current state of the art and five-year development potential of deepfake techniques? (Chapter 3)
2 What does the societal context in which these techniques arise look like? (Chapter 4)
3 What are the benefits, risks and impacts associated with deepfakes? (Chapter 5)
4 What does the current regulatory landscape related to deepfakes look like? (Chapter 6)
5 What are the remaining regulatory gaps? (Chapter 7)
6 What policy options could address these gaps? (Chapter 8)

The findings are based on a review of scientific and grey literature and relevant policies, combined with nine expert interviews and an expert review of the policy options.

2. Deepfake and synthetic media technologies

In this report, deepfakes are defined as manipulated or synthetic audio or visual media that seem authentic, and which feature people that appear to say or do something they have never said or done, produced using artificial intelligence techniques, including machine learning and deep learning. Deepfakes can best be understood as a subset of a broader category of AI-generated 'synthetic media', which not only includes video and audio, but also photos and text. This report focuses on a limited number of synthetic media that are powered by AI: deepfake videos, voice cloning and text synthesis. It also includes a brief discussion of 3D animation technologies, since these yield very similar results and are increasingly used in conjunction with AI approaches.

Deepfake video technology

Three recent developments caused a breakthrough in image manipulation capabilities. First, computer vision scientists developed algorithms that can automatically map facial landmarks in images, such as the position of eyebrows and nose, leading to facial recognition techniques. Second, the rise of the internet – especially video- and photo-sharing platforms – made large quantities of audio-visual data available. The third crucial development is the increase in image forensics capacities, enabling automatic detection of forgeries. These developments created the pre-conditions for AI technologies to flourish. The power of AI lies in its learning cycle approach: it detects patterns in large datasets and produces similar products. It is also able to learn from the outputs of forensics algorithms, since these teach the AI algorithms what to improve upon in the next production cycle. Two specific AI approaches are commonly found in deepfake programmes: Generative Adversarial Networks (GANs) and Autoencoders.
GANs are machine learning algorithms that can analyse a set of images and create new images with a comparable level of quality. Autoencoders can extract information about facial features from images and utilise this information to construct images with a different expression (see Annex 3 for further information).

Voice cloning technology

Voice cloning technology enables computers to create an imitation of a human voice. Voice cloning technologies are also known as audio-graphic deepfakes, speech synthesis or voice conversion/swapping. AI voice cloning software can generate synthetic speech that is remarkably similar to a targeted human voice. Text-to-Speech (TTS) technology has become a standard feature of everyday consumer electronics, such as Google Home, Apple Siri, Amazon Alexa and navigation systems. The barriers to creating voice clones are diminishing as a result of a variety of easily accessible AI applications. These systems are capable of imitating the sound of a person's voice, and can 'pronounce' a text input. The quality of voice clones has recently improved rapidly, mainly due to the invention of GANs (see Annex 3). Thus, the use of AI technology gives a new dimension to voice clone credibility and the speed at which a credible clone can be created. However, it is not just the sound of a voice that makes it convincing. The content of the audio clip also has to match the style and vocabulary of the target. Voice cloning technology is therefore connected to text synthesis technology, which can be used to automatically generate content that resembles the target's style.

Text synthesis technology

Text synthesis technology is used in the context of deepfakes to generate texts that imitate the unique speaking style of a target. The technologies lean heavily on natural language processing (NLP), a scientific discipline at the intersection of computer science and linguistics whose primary application is to improve textual and verbal interactions between humans and computers. NLP systems can analyse large amounts of text, including transcripts of audio clips of a particular target. This results in a system that is capable of interpreting speech to some extent, including the words as well as a level of understanding of the emotional subtleties and intentions expressed. This can result in a model of a person's speaking style, which can, in turn, be used to synthesise novel speech.

Detection and prevention

There are two distinct approaches to deepfake detection: manual and automatic detection. Manual detection requires a skilled person to inspect the video material and look for inconsistencies or cues that might indicate forgery. A manual approach could be feasible when dealing with low quantities of suspected materials, but is not compatible with the scale at which audio-visual materials are used in modern society. Automatic detection software can be based on a (combination of) detectable giveaways, some of which are AI-based themselves:

- Speaker recognition
- Voice liveness detection
- Facial recognition
- Facial feature analysis
- Temporal inconsistencies
- Visual artefacts
- Lack of authentic indicators

The multitude of detection methods might look reassuring, but there are several important cautions that need to be kept in mind. One caution is that the performance of detection algorithms is often measured by benchmarking it against a common data set with known deepfake videos.
However, studies into detection evasion show that even simple modifications in deepfake production techniques can already drastically reduce the reliability of a detector. Another problem detectors face is that audio-graphic material is often compressed or reduced in size when shared on online platforms such as social media and chat apps. The reduction in the number of pixels and the artefacts that sound and image compression create can interfere with the ability to detect deepfakes. Several technical strategies may prevent an image or audio clip from being used as an input for creating deepfakes, or limit its potential impact. Prevention strategies include adversarial attacks on deepfake algorithms, strengthening the markers of authenticity of audio-visual materials, and technical aids for people to more easily spot deepfakes.

3. Societal context

Media manipulation and doctored imagery are by no means new phenomena. In that sense, deepfakes can be seen as just a new technological expression of a much older phenomenon. However, that perspective would fall short when it comes to understanding their potential societal impact. A number of connected societal developments help create a welcoming environment for deepfakes: the changing media landscape driven by online sharing platforms; the growing importance of visual communication; and the growing spread of disinformation. Deepfakes find fertile ground in both traditional and new media because of their often sensational nature. Furthermore, popular visual-first social media platforms such as Instagram, TikTok and Snapchat already include manipulation options such as face filters and video editing tools, further normalising the manipulation of images and videos. Concerningly, non-consensual pornographic deepfakes seem to almost exclusively target women, indicating that the risks of deepfakes have an important gender dimension.

Deepfakes and disinformation

Deepfakes can be considered in the wider context of digital disinformation and changes in journalism. Here, deepfakes are only the tip of the iceberg, shaping current developments in the field of news and media. These comprise phenomena and developments including fake news, the manipulation of social media channels by trolls or social bots, and even public distrust of scientific evidence. Deepfakes enable different forms of misleading information. First, deepfakes can take the form of convincing misinformation; fiction may become indistinguishable from fact to an ordinary citizen. Second, disinformation – misleading information created or distributed with the intention to cause harm – may be complemented with deepfake materials to increase its misleading potential. Third, deepfakes can be used in combination with political micro-targeting techniques. Such targeted deepfakes can be especially impactful. Micro-targeting is an advertising method that allows producers to send customised deepfakes that strongly resonate with a specific audience. Perhaps the most worrying societal trend fed by the rise of disinformation and deepfakes is the perceived erosion of trust in news and information, the confusion of facts and opinions, and even of 'truth' itself. A recent empirical study has indeed shown that the mere existence of deepfakes feeds distrust in any kind of information, whether true or false.

4. Benefits, risks and impacts

Deepfake technologies can be used for a wide variety of purposes, with both positive and negative impacts.
Beneficial applications of deepfakes can be conceived in the following areas: audio-graphic productions; human-machine interactions (improving digital experiences); video conferencing; satire; personal or artistic creative expression; and medical (research) applications (e.g. face reconstruction or voice creation). Deepfake technologies may also have a malicious, deceitful and even destructive potential at an individual, organisational and societal level. The broad range of possible risks can be differentiated into three categories of harm: psychological, financial and societal. Since deepfakes target individual persons, there are firstly direct psychological consequences for the target. Secondly, it is also clear that deepfakes can be created and distributed with the intent to cause a wide range of financial harms. Thirdly, there are grave concerns about the overarching societal consequences of the technology. An overview of the risks identified in this research is presented in the table below.

 

 

5. Cascading impacts

The impact of a single deepfake is not limited to a single type or category of risk, but rather involves a combination of cascading impacts at different levels (see infographic below). First, as deepfakes target individuals, the impact often starts at the individual level. Second, this may cause harm to a specific group or organisation. Third, the notion of the existence of deepfakes, a well-targeted deepfake, or the cumulative effect of deepfakes, may lead to severe harms at the societal level. The infographic on the next page depicts three scenarios that illustrate the potential impacts of three types of deepfakes on the individual, group and societal levels: a manipulated pornographic video; a manipulated sound clip given as evidence; and a false statement to influence the political process.

 

 

6. Regulatory landscape and gaps

The regulatory landscape related to deepfakes comprises a complex web of constitutional norms, as well as hard and soft regulations at both the EU and the Member State level. On the European level, the most relevant policy trajectories and regulatory frameworks are:

- The AI regulatory framework
- The General Data Protection Regulation
- The copyright regime
- The e-Commerce Directive
- The digital services act
- The Audiovisual Media Services Directive
- The Code of Practice on Disinformation
- The action plan on disinformation
- The democracy action plan

Even though the current rules and regulations offer at least some guidance for mitigating potential negative impacts of deepfakes, the legal route for victims remains challenging. Typically, different actors are involved in the lifecycle of a deepfake. These actors might have competing rights and obligations. The scenarios in Chapter 7 illustrate how perpetrators often act anonymously, making it harder to hold them accountable. It seems that platforms could play a pivotal role in helping the victim to identify the perpetrator. Moreover, technology providers also have responsibilities in safeguarding the positive and legal use of their technologies. This leads to the conclusion that policy-makers, when aiming to mitigate the potential negative impacts of deepfakes, should take different dimensions of the deepfake lifecycle into account.

7. Policy options

The report identifies various policy options for mitigating the negative impacts associated with deepfakes. In line with the different phases of the 'deepfake lifecycle', we distinguish five dimensions of policy measures:

1. the technology dimension,
2. the creation dimension,
3. the circulation dimension,
4. the target dimension, and
5. the audience dimension.

 

 

Technology dimension

The technology dimension covers policy options aimed at addressing the technology underlying deepfakes – AI-based machine learning techniques – and the actors involved in producing and providing this technology. The regulation of such technology lies largely within the domain of the AI regulatory framework as proposed by the European Commission. The framework takes a risk-based approach to the regulation of AI. Deepfakes are explicitly covered in the Commission proposal as 'AI systems used to generate or manipulate image, audio or video content', which have to adhere to certain minimum requirements, most notably when it comes to labelling. They are not included in the 'high-risk' category, and uncertainty remains as to whether they could fall under the 'prohibited' category. The current AI framework proposal thus leaves room for interpretation. Since this research has documented a wide range of applications of deepfake technology, some of which are clearly high-risk, clarifications and additions to the AI framework proposal are recommended. Options include clarification of which AI practices should be prohibited under the AI framework; creation of legal obligations for deepfake technology providers; and regulation of deepfake technology as high-risk (for a full overview of the policy options identified, see Table 3).

Creation dimension

This dimension covers the policy options aimed at addressing the creators of deepfakes, or in AI framework terminology: the 'users' of AI systems. The AI framework proposal already formulates some rules and restrictions for the use of deepfake technology, but additional measures are possible. Options include clarification of the guidelines for the manner of labelling; limiting the exceptions to the deepfake labelling requirement; and banning certain applications altogether. This dimension also addresses those who use deepfake technology for malicious purposes: the 'perpetrators'. Malicious users of deepfake technology often hide behind anonymity and cannot be easily identified, thereby escaping accountability. These users cannot be expected to willingly comply with the labelling requirement as introduced in the AI framework proposal. Policy measures against malicious users of deepfake technology may therefore include extending current legal frameworks with regard to criminal offences, diplomatic actions and international agreements to refrain from the use of deepfakes by foreign states and their intelligence agencies (for a full overview of the policy options identified, see Table 3).

Circulation dimension

This dimension covers the policy options aimed at addressing the circulation of deepfakes, by formulating possible rules and restrictions for the dissemination of (certain) deepfakes. Online platforms, media and communication services play a crucial role in the dissemination of deepfakes. The dissemination and circulation of a deepfake to a large extent determine the scale and the severity of its impact. Therefore, responsibilities and obligations for platforms and other intermediaries are often recommended.
Policy options that address this dimension mainly fit within the domain of the proposed digital services act, and include obliging platforms and other intermediaries to have deepfake detection software in place; increasing transparency obligations with regard to deepfake detection systems, detection results, and labelling and take-down decisions; and slowing down the speed of circulation (for a full overview of the policy options identified, see Table 3).

Target dimension

Malicious deepfakes create impacts at the individual level, for the person(s) depicted in the deepfake. This research has demonstrated that the rights of victims may be protected in principle, but that it often proves difficult to give effect to this protection. We therefore offer several options for improving the protection of victims, including institutionalising support for victims of deepfakes; strengthening the capacity of data protection authorities to respond to the use of personal data for deepfakes; and developing a unified approach to personality rights within the European Union (for a full overview of the policy options identified, see Table 3).

Audience dimension

Deepfake impacts transcend the individual level and can cascade to group or even societal levels. Whether this happens partly depends on the audience response: will they believe the deepfake, disseminate deepfakes further when they receive them, lose trust in institutions? The audience dimension is therefore the final crucial dimension for policy-makers seeking to limit the risks and impacts of deepfakes. Options listed here include the labelling of trustworthy sources, and investing in media literacy and technological citizenship (for a full overview of the policy options identified, see Table 3).

8. Conclusions

This research has identified numerous malicious as well as beneficial applications of deepfake technologies. These applications do not strike an equal balance, as malicious applications pose serious risks to fundamental rights. Deepfake technologies can thus be considered dual-use and should be regulated as such. The invention of deepfake technologies has severe consequences for the trustworthiness of all audio-graphic material. It gives rise to a wide range of potential societal and financial harms, including manipulation of democratic processes and of the financial, justice and scientific systems. Deepfakes enable all kinds of fraud, in particular fraud involving identity theft. Individuals – especially women – are at increased risk of defamation, intimidation and extortion, as deepfake technologies are currently predominantly used to swap the faces of victims with those of actresses in pornographic videos. Taking an AI-based approach to mitigating the risks posed by deepfakes will not suffice, for three reasons. First, other technologies can be used to create audio-graphic materials that are effectively similar to deepfakes; most notably, 3D animation techniques may create very realistic video footage. Second, the potential harms of the technology are only partly the result of the deepfake videos or underlying technologies. Several mechanisms are at play that are equally essential. For example, for the manipulation of public opinion, deepfakes need not only to be produced, but also distributed. Frequently, the policies of media broadcasters and internet platform companies are instrumental to the impact of deepfakes.
Third, although deepfakes can be defined in a sociological sense, it may prove much more difficult to grasp deepfake videos, as well as the underlying technologies, in legal terms. There is an inherent subjective aspect to the seeming authenticity of deepfakes. A video that seems convincing to one audience may not be so to another, as people often use contextual information or background knowledge to make a judgement about authenticity. Similarly, it may be practically impossible to anticipate or assess whether a particular technology may or may not be used to create deepfakes. One has to bear in mind that the risks of deepfakes do not solely lie in the underlying technology, but largely depend on its use and application. Thus, in order to mitigate the risks posed by deepfakes, policy-makers could consider options that address the wider societal context and go beyond regulation. In addition to the technology provider dimension, this research has identified four additional dimensions for policy-makers to consider: deepfake creation; circulation; target/victim; and audience. The overall conclusion of this research is that the increased likelihood of deepfakes forces society to adopt a higher level of distrust towards all audio-graphic information. Audio-graphic evidence will need to be met with greater scepticism and will have to meet higher standards. Individuals and institutions will need to develop new skills and procedures to construct a trustworthy image of reality, given that they will inevitably be confronted with deceptive information. Furthermore, deepfake technology is a fast-moving target. There are no quick fixes. Mitigating the risks of deepfakes thus requires continuous reflection and permanent learning at all governance levels. The European Union could play a leading role in this process.

 

 

3. Deepfake and synthetic media technologies

This chapter describes the technological aspects of photo- and video-graphic deepfakes, audio-graphic deepfakes (voice cloning) and text synthesis.

3.1. Photo- and video-graphic deepfake technology

Photo- and video-graphic deepfakes are created by similar technologies. Videos are simply converted into photos by splitting every frame. Next, each image is manipulated separately.

Image manipulation technology gradually evolved over time

The methods and level of sophistication of such manipulations have gradually increased over the past decades. When computers were equipped with graphical user interfaces in the 1970s, the first applications for image manipulation were developed as well. When Photoshop became popular in the 1990s, a broad audience gained the ability to manipulate images. High-quality video manipulation, however, was until recently primarily conducted by professionals from the cinematographic industry and academics in the field of image processing. Automatic manipulations similar to what we understand as deepfakes today already started to appear in the 1990s, such as the Video Rewrite Program that synthesised facial animations of US president John F Kennedy in 1997. 34 As computing power increased over time, movie studios developed Computer-Generated Imagery (CGI) technology and distributed the results in cinemas around the world. A well-known example is the winner of the 2009 Academy Award for Best Visual Effects: The Curious Case of Benjamin Button. Throughout the entire movie, computer-aided manipulations of the face of actor Brad Pitt are used to create the illusion of reverse ageing.

Recent breakthrough technological progress

Three recent developments caused a breakthrough in image manipulation capabilities. First, computer vision scientists developed algorithms that can automatically map facial landmarks in images, such as the position of eyebrows and nose, leading to facial recognition techniques. Simultaneously, the rise of the internet – especially video- and photo-sharing platforms, such as YouTube – made large quantities of audio-visual data available. Today, data sets containing large quantities of pre-labelled images and videos of celebrities are widely available. 35 This also explains why celebrities and public figures such as US President Barack Obama were among the first to appear in deepfake videos. 36 The third crucial development is the increase in image forensics capacities, enabling automatic detection of forgeries. The above-mentioned developments are three important pre-conditions for AI technologies to flourish. AI can gain from a learning approach when large data sets are available combined with the ability to gain feedback. Therefore, forensics algorithms are crucial. Two specific AI approaches are commonly found in deepfake programmes: Generative Adversarial Networks (GANs) and Autoencoders. GANs are machine learning algorithms that can analyse a set of images and create new images with a comparable level of quality. Autoencoders can extract information about facial features in images, and utilise this information to construct images with a different expression. In Annex 3, we describe these techniques in more detail.
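The adversarial training idea behind GANs can be illustrated with a short, simplified sketch. The code below is a minimal, hypothetical example in PyTorch, not the pipeline of any specific deepfake tool: a generator learns to turn random noise into small face-like images, while a discriminator learns to tell real images from generated ones, and each network improves by training against the other.

```python
# Minimal, illustrative GAN training step (PyTorch).
# A simplified sketch of the adversarial idea only; real deepfake pipelines
# use far larger convolutional networks and curated face datasets.
import torch
import torch.nn as nn

IMG_SIZE = 64 * 64      # flattened greyscale image
NOISE_DIM = 100

generator = nn.Sequential(          # noise -> fake image
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_SIZE), nn.Tanh(),
)
discriminator = nn.Sequential(      # image -> probability "real"
    nn.Linear(IMG_SIZE, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial update: discriminator first, then generator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from generated images.
    fake_images = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, NOISE_DIM))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Example usage with a random stand-in batch of 'real' images in [-1, 1]:
training_step(torch.rand(16, IMG_SIZE) * 2 - 1)
```

In this feedback loop, the discriminator plays the same role as a forensics detector: every improvement in detection provides a training signal that pushes the generator towards more convincing output.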

 

 

3D avatar animation technology

3D animation technology is increasingly able to generate videos of a similar quality to AI-based deepfake technology. Some deepfake programmes even combine AI image generation and 3D animation (see Section 3.5 on trends). Most notable are avatar technologies that animate 3D models of a person's head or entire body. These programmes first create a photorealistic 3D model, either manually or automatically, by deriving the 3D landmarks from a single image or multiple images of a person. Next, the 3D model can be animated by capturing the movements of an actor, or by programmatically animating the model based on the interpretation of an audio-graphic speech fragment or text. 3D facial animation techniques were until recently mostly applied in cinema movies and computer games. In the past five years, the popularity of Virtual Reality and Augmented Reality technology has increased, due to the availability of equipment at a consumer-friendly price. Large technology companies, such as Facebook, are also investing in technological developments. Their desire to let users control a realistic virtual representation of themselves in a 3D environment has led to the development of products such as Facebook Codec Avatar. In demonstration videos, the company shows that it is difficult for an audience to tell the difference between a video of a real person and one that is generated using their 3D avatar technology (see Figure 1). 37

 

 

3.2. Specific graphical deepfake techniques

Within the realm of deepfake techniques, several specific applications can be discerned. The technologies described above can, for example, be applied to specific parts of an image or to entire frames of a video, resulting in specific outcomes that are often described as discrete deepfake techniques. In the table below, we list frequently used terms that refer to these specific techniques, accompanied by a brief description. Below the table, a collection of examples is presented.

 

 

3.3. Voice cloning technology

Voice cloning technology enables computers to create an imitation of a human voice. Voice cloning technologies are also known as audio-graphic deepfakes, speech synthesis or voice conversion/swapping. 53 AI voice cloning software can generate synthetic speech that is remarkably similar to a targeted human voice. Some believe that the difference between a real and a synthesised voice is becoming 'imperceptible to the average person'. 54

The development of AI voice cloning software began decades ago, when a number of methods were invented for computers to synthesise voice. These so-called Text-to-Speech (TTS) algorithms are able to convert text into spoken words, allowing computers to use voice for interacting with humans. In many cases – such as announcement systems in train stations – traditional audio messages have been replaced by a TTS system, eliminating the need to pre-record every possible message and offering much greater flexibility. Traditionally, there are two approaches to TTS: Concatenative TTS and Parametric TTS. 55 Concatenative TTS utilises a database of audio clips containing words and sounds that can be combined to form full sentences. The resulting audio is understandable, but has a typical robotic ring to it. It is difficult for Concatenative TTS to express the emotions or subtle intonations that are normal in natural speech. Using Concatenative TTS to clone a voice requires a serious investment, as a new database has to be built for every new voice. Parametric TTS takes a different approach. Instead of using pre-recorded audio clips, it uses a model of a voice. This model can be derived from recordings of a target, and is increasingly able to capture the characteristic sound and subtleties of a person's pronunciation. Once a Parametric TTS system has been built to create a model of a specific target, it can be reused to create models of other targets as well. This greatly reduces the operational costs compared to Concatenative TTS. However, before the invention of modern AI techniques such as GANs (see Annex 3), this method yielded unconvincing results, and humans were able to quickly recognise that the resulting audio was an imitation. Today, artificial intelligence has enormously increased the quality of Parametric TTS-based voice cloning.

TTS has become a standard feature of everyday consumer electronics. Popular TTS-based devices are voice assistants, such as Google Home, Apple Siri and Amazon Alexa, and navigation systems. The barriers to creating voice clones are diminishing due to a variety of easily accessible and reusable AI-powered tools such as Tacotron56, WaveNet57, Deep Voice58 or Voice Loop. 59 These systems are capable of imitating the sound of any person's voice, and can 'pronounce' a text input. An audio clip with just a few minutes of recorded speech can already be enough to extract the characteristic features of a person's voice. The extracted information is used to create an AI voice model. Based on this model, a computer can generate new audio clips in which any text can be pronounced with a sound that is very similar to the target voice. For example, as part of an advertising campaign for snacks, users can generate a custom video in which Argentine football celebrity Lionel Messi seems to speak English fluently. 60
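The stages described above – extracting a speaker's characteristic features from a short recording, building a voice model, and then 'pronouncing' arbitrary text – can be summarised in a short Python sketch. The function names below (extract_speaker_embedding, synthesise_spectrogram, vocode) are hypothetical placeholders for the encoder, synthesiser and vocoder components found in typical voice cloning toolkits; they do not refer to any specific product.

```python
# Illustrative sketch of a typical three-stage voice cloning pipeline.
# The three helper functions are hypothetical stand-ins for the components
# that real toolkits provide (speaker encoder, synthesiser, vocoder).
from dataclasses import dataclass
from typing import List

@dataclass
class VoiceModel:
    """A compact numerical 'fingerprint' of the target voice."""
    embedding: List[float]

def extract_speaker_embedding(reference_audio: bytes) -> VoiceModel:
    # Stage 1: a speaker encoder turns a few minutes (or even seconds)
    # of recorded speech into a fixed-length embedding vector.
    raise NotImplementedError("placeholder for a speaker-encoder network")

def synthesise_spectrogram(text: str, voice: VoiceModel):
    # Stage 2: a TTS synthesiser (e.g. a Tacotron-style network) produces
    # a spectrogram of the text, conditioned on the target's embedding.
    raise NotImplementedError("placeholder for a synthesiser network")

def vocode(spectrogram) -> bytes:
    # Stage 3: a neural vocoder (e.g. a WaveNet-style network) converts
    # the spectrogram into an audible waveform.
    raise NotImplementedError("placeholder for a vocoder network")

def clone_voice(reference_audio: bytes, text: str) -> bytes:
    """End-to-end: reference recording plus arbitrary text -> synthetic speech."""
    voice = extract_speaker_embedding(reference_audio)
    spectrogram = synthesise_spectrogram(text, voice)
    return vocode(spectrogram)
```

The key point of the pipeline is that only the first stage depends on the target: once an embedding exists, any text can be rendered in that voice at negligible marginal cost, which is what makes the technique attractive both for legitimate personalisation and for impersonation fraud.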

 

 

The quality of the output of AI-based TTS systems is steadily improving. Nowadays, the models are able to learn based on the discovery of new patterns in audio data. The invention of GANs – which are also pivotal to the acceleration of graphic deepfakes (see Annex 3 for a detailed description of GANs) – has also accelerated the development of voice clones, resulting in increasingly convincing clones that are harder for humans to detect. Thus, the use of AI technology gives a new dimension to clone credibility and the speed at which a credible clone can be created. However, it is not just the sound of a voice that makes it a convincing clone. The content of the audio clip also has to match the style and vocabulary of the target. Voice cloning technology is therefore connected to the next paragraph on text synthesis technology, which can be used to automatically generate content that resembles the target's style.

3.4. Text synthesis technology

Text synthesis technology is used in the context of deepfakes to generate texts that imitate the unique writing and speaking style of a target. The technologies lean heavily on Natural Language Processing (NLP), a scientific discipline at the intersection of computer science and linguistics. Its primary application is to improve textual and verbal interactions between humans and computers. NLP systems can analyse large amounts of text, including transcripts of audio clips of a particular target. This results in a system which is capable of interpreting speech to some extent, including the words as well as a level of understanding of the emotional subtleties and intentions expressed. This can result in a model of a person's speaking style, which can in turn be used to synthesise novel speeches. A common architecture used in NLP is a deep learning algorithm called the Transformer. This algorithm is essentially able to 'transform' an input text into a new text, by learning how the words in sentences and texts relate to each other. One of the most advanced in a series of language models built on this architecture is Generative Pre-trained Transformer 3 (GPT-3)61, created by OpenAI, a San Francisco-based artificial intelligence research laboratory. GPT-3 is a general-purpose NLP model that has shown impressive performance in translation and question-answering, as well as in unscrambling words. The OpenAI researchers claim that 'GPT-3 can even generate news articles which human evaluators have difficulty distinguishing from articles written by humans'. At present, large amounts of computing power, electricity and training data are needed to create GPT-3 models. This has led to scrutiny by prominent AI ethics researchers of the environmental impact of this technology.62 However, the OpenAI researchers state that once such a model is trained, it takes relatively low-power computers to use the model and generate large amounts (hundreds of pages) of text.
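To illustrate how accessible Transformer-based text generation has become, the sketch below uses the open-source Hugging Face transformers library with GPT-2, a smaller, openly downloadable predecessor of GPT-3 (GPT-3 itself is only offered through a commercial API). The prompt text is an invented example.

```python
# Minimal text-generation sketch using an openly available Transformer model.
# GPT-2 is used here as a freely downloadable stand-in for larger models
# such as GPT-3, which are only accessible through commercial APIs.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A hypothetical prompt imitating the opening of a news article.
prompt = "In a statement released this morning, the minister announced that"

outputs = generator(
    prompt,
    max_length=80,          # total length (prompt plus continuation) in tokens
    num_return_sequences=3,
    do_sample=True,         # sample rather than always picking the likeliest word
    temperature=0.9,        # higher values give more varied continuations
)

for i, out in enumerate(outputs, start=1):
    print(f"--- continuation {i} ---")
    print(out["generated_text"])
```

Mimicking a specific person's style, as described above, would additionally require fine-tuning such a model on a corpus of that person's texts or transcripts, but the generation step itself remains this simple.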

 

 

3.5. Trends in deepfake videos, voice cloning and text synthesis

Since the inception of the term deepfakes less than five years ago, the concept itself and its predecessors have developed rapidly. A number of key drivers have enabled a number of trends.

The key drivers are:

- Availability of datasets and computing power. The computer vision community has created large datasets with labelled visual material, and many of these are freely available on the internet. These datasets are necessary for training the machine learning algorithms. The creators of deepfakes can readily access these datasets, eliminating the time-consuming work of gathering and labelling material. Moreover, the computing power required for training machine learning algorithms is available at low cost due to cloud computing services. 63 Some services, such as Google Colab, actually provide enough computing power for creating short high-quality deepfake videos in a matter of hours. When utilising multiple Google accounts, it is possible to gain access to a significant amount of computing power at zero monetary cost. Thus, a regular computer or even a smartphone with internet access suffices for creating high-quality deepfakes.

- Accessibility of high-quality algorithms and pre-trained models. The academic community is accustomed to publishing work in open or easily accessible journals and code repositories, such as GitHub. This drives strong uptake by the creators of deepfake software. Additionally, pre-trained machine learning models are shared among deepfake creators. Models only need to be trained once and can be reused indefinitely, eliminating the time-consuming step of training models on datasets and partially eliminating the need for computing power.

- 5G connectivity. Across Europe, telecom operators are launching the next generation of mobile connectivity networks. These 5G networks offer increased bandwidth, enabling users to stream and view video content at higher quality as well as to use portable virtual and augmented reality systems.

- Rise of 3D sensors. The latest generation of consumer electronics is equipped with 3D sensors. At first, these were mainly used for authentication purposes, such as unlocking smartphones by scanning the user's face. The latest Apple iPhone and iPad now also contain general-purpose 3D sensors that can be used to capture 3D information of entire scenes and to scan objects. It is expected that the creators of deepfakes will soon benefit from obtaining large quantities of 3D data on their targets' faces.

- Cat-and-mouse game between producers and detectors. Paradoxically, increased image forensics and deepfake detection capabilities drive up the quality of deepfake videos. As described in the section on GANs (Annex 3), the algorithms that create deepfakes benefit from detectors due to their capacity to learn from feedback loops.64 This also explains why many of the scholarly articles on deepfake detection are published by the same authors that work on algorithms that create deepfake capabilities. This innovation cycle is further catalysed by the availability of shared libraries of deepfake videos, which are frequently supplemented with the products of the latest deepfake creation algorithms and used to develop and benchmark new detection methods. 65

These drivers lead to a number of trends:

- Live real-time deepfakes. The additional bandwidth offered by new communication technologies such as 5G enables users to utilise the power of cloud computing to manipulate video streams in real time. Deepfake technologies can therefore be applied in videoconferencing settings, live-streaming video services and television.

 

 

- Supply and demand platforms for deepfakes. The strong media appeal and increased popularity of video media have created a market for manipulated videos that is facilitated by supply and demand platforms. There are special marketplaces on which users or potential buyers can post requests for deepfake videos. For example, requests for non-consensual pornographic videos of celebrities are fulfilled on internet forums, and certain websites are dedicated to sharing such videos.

- Commodification of deepfake tools. The availability of computing power and the accessibility of high-quality algorithms lead to a rapid commodification of deepfake tools. Advanced deepfake software suites are freely distributed and accompanied by instructional materials, making it relatively easy for those with some background in computer programming to get started.66 Software suites for video manipulation also offer marketplaces for exchanging deepfake algorithms.67 Moreover, several easy-to-use smartphone applications exist that require no technical know-how whatsoever. 68 There are even chat bots on platforms like Telegram that return a deepfake to anyone who sends them an image; these are disturbingly and notoriously known for virtually undressing women, including under-age victims. 69

- Deepfake-as-a-service companies. The increased demand for deepfakes has also led to the establishment of several companies that deliver deepfakes as a product or even as an online service. On platforms like Synthesia and Rephrase, anyone can generate videos based on text input and a target video. These services are intended for use by marketers to personalise videos, eliminating the need to record a video for each recipient. Essentially, these services make producing a deepfake video as easy as editing text.

- AI and 3D animation hybrids. The advent of photorealistic 3D avatar technology offers clear synergetic opportunities when combined with AI-based deepfake technology. There are already publications and services on the market showing that deepfake creators combine both approaches (see Figure 7). 70

- Reduced input requirements. There is a trend among deepfake creators to develop algorithms that can generate high-quality output based on very little input. For example, some algorithms seem capable of generating deepfake videos based on a single picture of the target, or of synthesising speech that convincingly resembles the target's voice based on only a few seconds of audio. 71 This means that the availability of large quantities of visual data of a particular person is no longer a requirement, making anyone with only a small number of audio-visual representations on the internet a potential target.

 

 

3.5.1. Five-year future scenario and risk development

When projecting the trends and drivers described above into the future, a scenario starts to form. Most likely, the tools for creating deepfakes will become abundantly available and easy to use within a matter of years. Already we see that smartphone apps that unlock only a part of the potential of the technology quickly become wildly popular. FaceApp, for example, allows users to alter their images, for instance to appear older; it was downloaded over 150 million times in mid-2019. 73 In 2021, the app Wombo, which applies lip-sync technology to images in order to create satirical videos, was downloaded over 2 million times in the first two weeks after its release. 74 It is therefore expected that the functionalities of these apps will be adopted by mainstream software and become part of the everyday use of social media within the next five years. The current rise of deepfake-as-a-service companies, and the uptake by large corporations like SAP75, means that deepfake videos and audio will be commonly used in software products and games. This mainstreaming effect means that a large part of the European population will become familiar with the technology in the near future. The expected sharp increase in availability will also translate into a much higher likelihood of abuse. Whereas today there are only a few examples of high-profile incidents linked to deepfake techniques, such as the attempted coup d'état in Gabon and non-consensual deepfake pornography mainly targeting female celebrities, such incidents will likely become more widespread. Preventative strategies, such as raising awareness of the existence of deepfakes and the filtering of nefarious deepfakes by social media platforms, will reduce some of the potential impact. However, given the cat-and-mouse dynamic between deepfake creators and detectors, it is likely that advanced actors will still be able to create undetectable forgeries and mislead their targets.

 

 

Thus, within the next five years, the nefarious use of deepfake technology will probably develop from a high-impact but low-likelihood risk into a high-impact risk with moderate to high likelihood. At the same time, the lowering of barriers to the use of deepfake technology will also catalyse its use for beneficial purposes. For example, it is likely that people will more often encounter life-like avatars that serve as virtual assistants. Current virtual assistants such as Google Home and Amazon's Alexa might be extended with a screen on which a human-like image is visible, creating the illusion of having a conversation with a (familiar) person instead of a robot. The integration of deepfake technology in augmented reality systems may introduce new risks that are not yet understood. Suppose a user has selected an avatar with the voice of a relative for the presentation of news from sources the user has selected. This could lead to a scenario in which a trusted person seems to pronounce disinformation. Although the psychological effects of having a trusted person present disinformation are not yet understood, it can be expected that this opens up new avenues for manipulation and accompanying risks.

3.6. Detection software and technical prevention strategies

Public concern about the potential risks of deepfakes has created a demand for detection and prevention. 76 Detection systems are necessary whenever manipulated materials are used as evidence, for example in court and insurance cases 77 or news reporting.78 Prevention is also necessary, as it has proven difficult to correct false information once the public has been exposed to it. 79 In the following section, we discuss common detection approaches, their limitations, and several common technical prevention strategies.

3.6.1. Detection technology

There are two distinct approaches to deepfake detection: manual and automatic detection. 80 Manual detection requires a skilled person to inspect the video material and look for inconsistencies or cues that might indicate forgery. Another logical approach, which some have attempted to automate, is to compare other audio-graphic material of the same event. 81 A manual approach could be feasible when dealing with low quantities of suspected materials. However, this approach is not compatible with the scale at which audio-visual materials are used in modern society, and it is therefore not a feasible solution at a societal level. Automatic detection software can be based on a (combination of) detectable giveaways:

- Speaker recognition. Recognition is based on both identification and verification. A speaker identification system can be used to determine who the speaker is, with just audio as an input. An automatic speaker verification (ASV) system verifies whether the voice of a speaker matches the claimed identity. These technologies are often based on comparing new audio fragments to previously determined voice prints in a database.82

 

 

- Voice liveness detection. This technology is able to detect whether the sound of a voice comes from a live person who is speaking, or from a pre-recorded clip. Even when voice clones are indistinguishable to the human ear, these kinds of (AI-based) tools can detect artefacts that are not present in the sound of a live voice. 83 These technologies are still applicable, yet become less reliable when the quality of the audio is reduced, such as in low-quality telephone conversations or radio interviews.

- Facial recognition. Software that is used to identify people in photographic materials can also be applied to suspected forged materials. 84 Deepfake algorithms often stretch or warp faces85, or only adjust distinct features when creating morphs 86, resulting in irregularities. Whenever facial recognition software fails to identify the person that is claimed to be portrayed, it might be an indication of forgery.

- Facial feature analysis. Researchers are developing algorithms for practically all facial landmarks, such as the position and movement of the nose, mouth and eyes, to spot artefacts caused by deepfake manipulations. The scientific literature on image forensics, for example, contains papers describing deepfake detectors that analyse the lack of eye-blinking87 or recognise manipulated eyebrows. 88

- Temporal inconsistencies. Since deepfake videos are often created by modifying each frame of a movie separately, detectable inconsistencies may occur between frames. 89 For example, deepfake algorithms could cause sudden changes in head pose90, inconsistent lip movement91 or other unnatural movements of facial landmarks.

- Visual artefacts. Whenever an image is partially modified, the deepfake algorithm must somehow create a transition between the original material and the manipulation. This often results in a blurry area 92, for example between the object in the foreground and its background.93 Also, when there are only few source materials, the algorithm might have to guess what certain expressions look like on a target's face. In all of these and similar cases, the algorithm may leave detectable artefacts in the output. These artefacts, or patterns of artefacts, can be detected by algorithms.94 In addition, deepfake algorithms that are trained to generate synthetic images often fail to deliver realistic backgrounds, 95 and accounting for changes in illumination still proves to be a challenge for some deepfake algorithms.

- Lack of authentic indicators. Camera sensors consist of tiny pixels that vary slightly in sensitivity due to the manufacturing process of the chips. These variations result in a sort of noise watermark that can be detected in every image or video that is made with a camera. Deepfake algorithms often disrupt this detectable pattern, which is an indication of forgery.96
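As a concrete illustration of the 'visual artefacts' giveaway, the sketch below uses OpenCV to compare the sharpness of the detected face region with the sharpness of the rest of each frame, using the variance of the Laplacian as a simple blur measure. This is a deliberately naive heuristic for illustration only, not a production deepfake detector; the video file name and the threshold value are hypothetical.

```python
# Naive visual-artefact heuristic: compare blur (Laplacian variance) of the
# face region against the rest of the frame. Blended deepfake faces are often
# smoother than their surroundings. Illustration only, not a real detector.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def sharpness(gray_region) -> float:
    """Variance of the Laplacian: low values indicate a blurry region."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def face_blur_ratios(video_path: str):
    """Yield (face sharpness / frame sharpness) for every frame with one face."""
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) != 1:
            continue  # skip frames without exactly one detected face
        x, y, w, h = faces[0]
        yield sharpness(gray[y:y + h, x:x + w]) / (sharpness(gray) + 1e-9)
    capture.release()

if __name__ == "__main__":
    ratios = list(face_blur_ratios("suspect_clip.mp4"))  # hypothetical file
    if ratios:
        avg = sum(ratios) / len(ratios)
        # A consistently low ratio is only a weak hint, never proof, of blending.
        print(f"average face/frame sharpness ratio: {avg:.2f}")
        print("possible blending artefacts" if avg < 0.8 else "no obvious blur cue")
```

Real detectors combine many such cues with learned features precisely because any single heuristic is easy to evade, as the next subsection explains.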

 

 

3.6.2. Detection limits

The extensive literature on deepfake detection methods might look reassuring, but there are several important cautions that need to be kept in mind. 97 First, the performance of detection algorithms is often measured by benchmarking against a common data set with known deepfake videos, such as the FaceForensics++ database, which contains 1.8 million samples. 98 However, a high confidence level in discriminating videos from such a dataset with known deepfakes does not guarantee trustworthy performance on entirely new materials. In practice, it turns out that detectors are often good at spotting only one kind of deepfake.99 Studies into detection evasion show that even simple modifications can drastically reduce the reliability of a detector.100 Another problem detectors face is that audio-graphic material is often compressed or reduced in size when shared on online platforms such as social media and chat apps. The reduction in the number of pixels and the artefacts that sound and image compression create can interfere with the ability to detect deepfakes. 101 Also, smartphone camera apps often have filters enabled by default, automatically modifying every image or video and nullifying the very notion of the existence of an authentic image in the first place.
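The benchmarking practice described above can be illustrated with a short evaluation sketch using scikit-learn. The detector object and the labelled clips are hypothetical placeholders; the point is only to show how accuracy and AUC scores on a known dataset are computed, and why such scores say little about unseen manipulation methods.

```python
# Sketch of how a deepfake detector is typically scored on a labelled benchmark.
# 'detector' and the clip list are hypothetical placeholders.
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate(detector, clips, labels):
    """labels: 1 = known deepfake, 0 = authentic footage."""
    # The detector is assumed to return a forgery probability per clip.
    scores = [detector.predict_proba(clip) for clip in clips]
    predictions = [1 if s >= 0.5 else 0 for s in scores]
    return {
        "accuracy": accuracy_score(labels, predictions),
        "auc": roc_auc_score(labels, scores),
    }

# A high score on a benchmark of *known* deepfakes (e.g. FaceForensics++-style
# data) does not guarantee similar performance on new, unseen manipulation
# techniques - which is exactly the caution raised in the text above.
```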

 

 

Automatic speaker verification (ASV) systems, which are an active field of research and development, also have serious shortcomings. ASV systems are good at dealing with classic forms of attack, such as replay and impersonation by another human actor. However, ASV systems are less effective against AI-based attacks 102 and need to increase their use of AI in order to improve detection capabilities. 103 The most pressing need is to develop a uniform forgery assessment methodology. Current detection systems based on only one countermeasure will not suffice. Voice cloning technology will continue to progress. Therefore, monitoring technological progress and continuous integration into a holistic and efficient detection system are needed. 104

3.6.3. Technical prevention strategies

There are several technical strategies that may prevent an image or audio clip from being used as an input for creating deepfakes, or may limit its potential impact. Prevention strategies include adversarial attacks on deepfake algorithms, strengthening the markers of authenticity of audio-visual materials, and technical aids for people to more easily spot deepfakes. In this section, each strategy is described in more detail.

Adversarial attacks on deepfake algorithms are methods that exploit vulnerabilities in computer vision algorithms. This is more or less the digital equivalent of a person wearing make-up to prevent identification by facial recognition cameras. The technology works by adding specific noise patterns to images as an overlay. The overlays are indistinguishable to the human eye. Computer vision algorithms, however, detect the noise and can be fooled into believing these are real features of the image, which will for example hamper the ability to correctly detect an object. This approach has been demonstrated to be effective against some deepfake algorithms.105 However, applying these techniques often requires the attacker to have some knowledge about the detector algorithm in order to create effective deceptive overlays. It is therefore not suitable as a generic prevention against manipulation.

The strengthening of authenticity markers of audio-visual content is often based on somehow registering authentic content or (digitally) watermarking audio-visual materials. Some argue that blockchain or distributed ledger technology (DLT) could be used to register original materials or a unique identifier. 106 Just as the British company Provenance aims to increase the transparency of product supply chains by registering the origin of every ingredient or component in a blockchain database, a similar system could be created for the supply of audio-graphic information. However, these initiatives often overlook the fact that this solution introduces many new vulnerabilities, such as attacks on the integrity of the DLT itself, or the dependency on technicians and organisations responsible for operating such a system. Also, for these solutions to be effective, there must be a link between the register and the recipient of information, which is not very feasible given the enormous number of devices and software people use to consume audio-graphic materials. Despite these difficulties, some initiatives are exploring the implementation of this approach.
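The registration idea can be illustrated with a minimal sketch that reduces a media file to a cryptographic fingerprint, which could then be stored in any register (a conventional database, a signed log or a distributed ledger). This is a simplified illustration of the principle, not a description of Provenance or any other existing service; the file name is hypothetical.

```python
# Minimal illustration of content registration: store a cryptographic
# fingerprint (SHA-256 hash) of the original file, then later check whether
# a circulating copy still matches it. Any alteration changes the hash.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a media file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Registration step (e.g. by the original publisher); the registry could be a
# database, a signed log or a distributed ledger entry.
registry = {}
# registry["interview_2021.mp4"] = fingerprint("interview_2021.mp4")  # hypothetical file

def is_unaltered(path: str, registered_hash: str) -> bool:
    """True only if the file is bit-for-bit identical to the registered original."""
    return fingerprint(path) == registered_hash
```

Note that a plain hash also stops matching as soon as a platform legitimately re-compresses the file, which is one reason the text above cautions that the link between register and recipient is hard to maintain in practice.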

 

 

Another approach that could authenticate audio-visual materials is to embed a (digital) watermark in the audio or graphic file itself.108 This could even be implemented in camera chips, which already have a unique variation in pixel sensitivity that results in a detectable noise pattern. Finally, harm could be prevented by supporting people in spotting deepfakes. Some initiatives aim to raise awareness and build such capacity by conducting 'prebunking' interventions. This approach entails exposing people to clearly labelled potential deepfakes. Often, malicious deepfake creators follow a common pattern, such as continuously repeating a certain narrative or targeting a particular individual. By informing people about the existence of such misinformation, they may become more critical and resilient when confronted with such videos. This approach has been shown to reduce susceptibility to traditional (non-audio-graphic) misinformation.109 Several institutions have also developed training software purposely built with this aim.
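As an illustration of the file-level watermarking idea mentioned above, the sketch below hides a short identifier in the least significant bits of an image's pixel values using NumPy. Least-significant-bit watermarking is shown here only because it is easy to follow; it is fragile (recompression destroys it) and is not the scheme used by any particular camera vendor or standard. The identifier string is hypothetical.

```python
# Toy least-significant-bit (LSB) watermark: hide a short ASCII identifier in
# the lowest bit of an image's pixel values, and read it back out.
# Illustrative only: LSB marks do not survive recompression or resizing.
import numpy as np

def embed_watermark(pixels: np.ndarray, text: str) -> np.ndarray:
    """Write `text` into the least significant bits of a uint8 image array."""
    bits = np.unpackbits(np.frombuffer(text.encode("ascii"), dtype=np.uint8))
    flat = pixels.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("image too small for this watermark")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite lowest bit
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, length: int) -> str:
    """Read back `length` ASCII characters from the least significant bits."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("ascii")

# Example with a random stand-in image instead of real camera output:
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed_watermark(image, "AGENCY-2021-000042")    # hypothetical identifier
print(extract_watermark(marked, len("AGENCY-2021-000042")))
```

A robust scheme would instead spread the mark redundantly across the image (or rely on the sensor noise pattern mentioned above) so that it survives ordinary processing, which is what makes watermark-based authentication technically demanding.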

 

 

5.2. Risks, harms and impact

Deepfake technologies are enabled by AI technologies and may also have a malicious, misleading and even destructive potential at an individual, organisational and societal level. The experts interviewed for this research felt that the term 'deepfake' itself has an inherent negative connotation, pointing towards widely perceived negative impacts and malicious outcomes of deepfake technologies. They say that deepfakes may irritate, humiliate and even spur violence. One interviewee, Justus Thies, stated that 'deepfakes are a poor outgrowth of synthetic media' with a 'malicious strand to exploit AI technology'. 159 Misuse and abuse of deepfake technologies is therefore giving rise to calls for criminalisation. One interviewee explicitly stated that we should not only speak about risks but rather about the dual use of deepfake technologies. In the strict sense, dual use means a technology can serve civilian and military purposes. In the broader sense, it means it has beneficial and malicious uses. Both meanings apply to deepfake technology. The broad range of possible risks can be differentiated into three categories of harm: psychological, financial and societal risks. 160 Since deepfakes target individual persons, there are firstly direct psychological consequences for the target. Second, it is also clear that deepfakes are created and distributed with the intent to cause a wide range of financial harms. Third, there are grave concerns about the overarching societal consequences of the technology.

 

 

It is important to note that some of the identified risks relate to harms that have already materialised; others – mainly at the societal level – are entirely plausible with today's technology and are likely to materialise in the future if no measures are taken and deepfake technologies become more readily accessible or broadly used.

5.3. Risk of psychological harms

The creation and publication of a deepfake may cause severe psychological harm to the individual represented. Smearing videos could be used for bullying, defamation and intimidation, which could cause profound reputational and psychological damage. The first applications of deepfake technology arose in a pornographic context, by editing the faces of celebrities into sex videos without their consent. The potential harms of these videos can be similar to those of revenge pornography, a form of cybercrime. According to Šepec & Lango (2020), revenge pornography refers to the 'non-consensual dissemination of intimate images that were taken with the consent of an individual but with the implicit expectation that these images would remain private'.162 Whereas anyone could become a victim of revenge pornography, several interviewees stress that non-consensual deepfake pornography has a strong gender dimension, as it appears to target almost exclusively women. Individuals portrayed in such videos may also suffer collateral consequences, such as reputational sabotage or loss of opportunities, for example in the job market. It has also been described as a strategy for silencing speech.163 Deepfakes may deepen a problematic social phenomenon known as 'social cooling', which means that people avoid seeking public attention because of the risk of becoming a target of deepfakes.

Another important consequence of convincing manipulated videos and audio of people doing or saying things they never did or said is their use for extortion. By threatening to expose fabricated content, perpetrators gain power over their victims, for example by demanding a fee or compliance with instructions. Thus, not just a deepfake itself, but also the use of a deepfake by a malicious actor can cause severe psychological harm. When pornographic images are used for extortion, this is known as sextortion. In addition to psychological harms to the individuals targeted by deepfakes, there are psychological harms to society at large. Making people aware of the very existence of deepfakes has been shown to undermine trust in visual media.164 Victims of extortion also describe a general increase in distrust towards others, as it may not always be clear who the perpetrator is.165 Even the threat of becoming a future victim of (s)extortion may already cause psychological harm.

5.4. Risk of financial harms

The emergence of deepfakes also gives rise to several risks of financial harm. First, the harms of the (s)extortion practices described above may well extend from the psychological into the financial domain, as such criminal actions are mostly financially driven. This financial harm may be inflicted on individuals as well as organisations, as employees could be corrupted through extortion. As the

 

 

creation of deepfake videos can be automated, the process of (s)extortion may also be automated and scale rapidly.166 Moreover, deepfake technology may be used to steal identities, for instance by attacking the biometric verification process for online banking transactions, or that of employees in an organisation. This new form of identity theft could serve various goals, such as creating convincing imitations of superiors giving orders or instructions to employees. A well-known example is the case of a transfer of 243,000 British pounds to a Hungarian bank account.167 Using voice cloning technology, the attacker pretended to be the chief executive of a United Kingdom (UK)-based energy firm and asked the firm's chief to make the transfer. It is also conceivable that criminals could obtain trade secrets, passwords or other important information from organisations in this way, resulting in substantial information security risks and subsequent financial harm.168 These types of scams can affect businesses, but also individuals and families. In an evolved version of the 'grandma scam', for instance, criminals use deepfakes to pose as a family member in urgent need of funds.169

Deepfakes can also enable numerous other methods of fraud. A deepfake video could depict a chief executive inciting hatred, insults or other immoral or illegal behaviour. False statements could also be made about alleged company takeovers or mergers, or about financial losses or bankruptcy.170 When such frauds target publicly traded companies, they may result in stock market manipulation. It is conceivable that, even if a company makes a timely clarifying statement, brand or reputational damage could still result, from which the company may not fully recover.

5.5. Risk of societal harms

This risk category covers the potential adverse impacts of deepfakes across multiple societal sectors and institutions. Vulnerable societal sectors include those that rely heavily on documented evidence, such as insurance, journalism, media and education, as well as societal and economic systems such as the financial markets, the criminal justice system, and the political and science systems. The paragraphs below elaborate on the kinds of harm that could be expected in these contexts.

Manipulation of news media

The risks of deepfakes are often linked to the potential harms of mis- and disinformation,171 recognising their potential to manipulate news media. Deepfake disinformation could, for example, comprise attempts to influence public opinion, gather fake campaign donations, or slander public figures. Researchers have demonstrated that a carefully designed deepfake video can have a political effect.172 In their paper 'Language Models are Few-Shot Learners',173 the researchers describe in detail the possible harmful effects of their text synthesis system GPT-3. They warn that the 'high-quality text generating capability of GPT-3 can make it difficult to distinguish synthetic text from the human-written text'. The

 

 

authors point to several scenarios of misuse; their list includes misinformation as well as other forms of fraudulent writing. Another risk concerns biases from the training data that end up in the models, such as stereotypes and prejudices. It was found, for instance, that the models associate words with religious terms in ways that reflect a negative bias towards some religions: words such as 'violent', 'terrorism' and 'terrorist' were associated at a higher rate with Islam than with other religions. The authors believe additional bias prevention measures are necessary to prevent harm. Critics state that the text-synthesis community is not investing enough in creating high-quality training sets, based on the false assumption that gathering more training data will always lead to better models. They recommend 'encouraging research directions beyond ever larger language models'.174 In addition to such harm to society at large, deepfake disinformation also confronts journalists with the challenge of fulfilling their ethical and moral duty to report the truth, placing an increased burden on them to determine the authenticity of text, audio and graphic materials.

Damage to economic stability

Manipulated news media can in turn damage economic stability. For example, synthetically generated statements about the dispute between Saudi Arabia and Russia regarding oil production quotas could have a negative impact on the price of oil and thus on the global economy. However, the severity of such an impairment of financial markets depends to a great extent on factors other than the quality of the deepfake itself. Bateman concludes that there is 'no serious threat to the stability of the global financial system or on national markets in mature healthy economies'.175 Developed countries would be more likely to be affected in already unstable situations, such as an ongoing economic crisis. In contrast, less developed countries, or rather emerging markets, are exposed to greater danger, as the assumed lack of stabilising institutions makes them more susceptible to manipulation.

Damage to the justice and science systems

Deepfakes may also damage the justice and science systems. Deepfake videos, voice clones and synthetic texts could be used to create false evidence in criminal court cases or to support scientific claims. As fraud has plagued science and the courts for years, it has already been criminalised; however, deepfakes may be much harder to detect. Deepfakes therefore raise serious concerns regarding the fundamental 'credibility and admissibility of audio-visual footage as electronic evidence before the courts'.176 Even when existing validation procedures for audio and video evidence are able to detect deepfakes, the very existence of deepfakes may still influence testimonies, because people may testify based on what they saw or heard in a deepfake outside of the court.

Erosion of trust

The potential manipulation of news media, science and the justice system leads to a much wider concern about a general erosion of trust in society. It is feared that deepfakes may lead to a situation in which trustworthy information no longer exists.177 This general loss of trust in any kind of information is sometimes referred to as an 'information apocalypse' or 'reality apathy'.

 

 

The conviction that what we see does not necessarily reflect the truth can ultimately lead to a point at which even the truth is no longer believed.179 This effect is also described as the 'liar's dividend': those spreading doubt and uncertainty ultimately benefit, because they gain the ability to mask the truth. The potential use of deepfakes for this purpose means that the technology introduces a new instrument for malicious politicians to gain power at the cost of citizens and journalists.

Damage to democracy

The erosion of trust created by deepfakes is especially disturbing at a time when there is already widespread concern about disinformation campaigns targeting democracies. Deepfakes can be expected to damage democracy in several ways, especially the public debate, elections, the legitimacy of democratic institutions180 and the power of citizens181 and politicians.182 The following paragraphs describe these potential problems in more detail.

The potential manipulation of news media is problematic as it ties directly into a vital process of democracies: public debate. The integrity and quality of public debate is crucial, as it is the main instrument for citizens to form their political opinions. However, for a public debate to function, there has to be some common sense of reality, which includes a shared understanding of what the debate is about, who is participating, and which positions these participants represent. Deepfakes may manipulate all of these aspects of the common sense of reality.183 There are also debates about how deepfake technologies may reinforce a broader change in the culture of public debate, through fragmentation and polarisation of digital communication. Deepfakes spread through micro-targeting have framing effects on people, who only believe what fits their own world view; a phenomenon also referred to as 'echo chambers'.184 This could also be used for political manipulation and targeted propaganda. In addition, interviewees indicate that this kind of disinformation fuels the rise of conspiracy theories.

Deepfakes may also inflict long-lasting damage on the reputation of public figures, including politicians and other elected officials, thereby enabling the manipulation of elections. In 2019, for example, a deepfake video circulated widely in Malaysia, depicting a political aide who appears to admit to having had a homosexual relationship with a cabinet minister. The video also includes a call to investigate alleged corruption by the minister, and led to a destabilisation of the coalition government.185 The manipulative effect on elections will be greatest if the attacker distributes a deepfake in such a way that there is enough time for it to circulate, but not enough time for the target

 

 

to debunk it. Examples of such disinformation interventions have been found in the elections in the United States in 2016 and in France in 2017.186

Damage to national security and international relationships

Deepfakes may also exacerbate social divisions, civil unrest, panic and conflict, and undermine public safety and national security.187 At worst, this could cause violent conflict, attacks on politicians, governance breakdown or threats to international relations. In 2018, for example, a video of Ali Bongo Ondimba, the President of Gabon, was published online. He had not been seen in public for months, and it had become a popular belief that he was in poor health, or even dead. The video led to a national crisis.188 A story that the video was a deepfake gained momentum, as it seemed to support the theory that the government was trying to hide the condition of the President. Ultimately, this story led to an unsuccessful coup d'état by the Gabonese military. Such examples show that deepfake videos could well cause domestic unrest and protests. It is also conceivable that deepfakes could damage international relationships, or even lead to international armed conflict, if governments engage in military action based on false information.189

5.6. Cascading impacts

The impact of a single deepfake is not limited to a single type or category of risk, but rather consists of a combination of cascading impacts at different levels (see Figure 9). First, as deepfakes target individuals, the impact often starts at the individual level. Second, this may lead to harms to a group or organisation. Third, the mere awareness of the existence of deepfakes, a well-targeted deepfake, or the cumulative effect of deepfakes may lead to severe harms at the societal level. The infographic below depicts three scenarios that illustrate the potential impacts of one type of deepfake at different levels:

1 Pornographic manipulated video. In this scenario, a pornographic video is used as a basis for blackmail. The potential direct impact is reputational and psychological damage to the person portrayed. The blackmail may extend to the group this person belongs to – for example the family – or to the company this person is associated with. Once pornographic deepfakes become more common, at the societal level this category of deepfakes may have an adverse impact on sexual morality.

2 Manipulated sound clip as evidence. The latest advancements in audio-graphic deepfakes mean that anyone who has published recordings of their voice could be fabricated into an audio recording that may serve as evidence to make that person look suspicious. At the individual level, being drawn into a court case based on manipulated evidence would obviously have severe consequences. At the organisational level, this means that courts will have to adjust their processes for authenticating evidence, or perhaps dismiss audio-graphic evidence altogether. This may hamper the course of justice and undermine the functioning of the court system as a whole.

3 False statement to influence politics. A recent study that demonstrated the political effects of a deepfake video used a manipulated statement about religion by a Christian-Democrat politician.190 It shows that this kind of deepfake may lead to reputational damage for the politician as well as a loss of trust in the political party. When such deepfakes are used at scale, damage to the public debate is likely and, in the long run, even the position of democratic institutions such as the parliament and the integrity of elections are at stake.

 

 

7.4. Conclusion

There are some important lessons to be learned from the three scenarios presented in this chapter. The scenarios illustrated how the impact of a single deepfake often exceeds the personal level. The broad societal impact of deepfakes is almost never limited to a single type of risk, but rather consists of a combination of cascading impacts at different levels. We have also seen that, even though the current rules and regulations offer at least some guidance for mitigating the potential negative impacts of deepfakes, the legal route for victims remains challenging. Typically, different actors are involved in the lifecycle of a deepfake, and these actors may have competing rights and obligations. The scenarios illustrated how perpetrators often act anonymously, making it harder to hold them accountable. Platforms could play a pivotal role in helping victims to identify perpetrators. Moreover, technology providers also have responsibilities in safeguarding the positive and lawful use of their technologies. This leads to the conclusion that, when aiming to mitigate the potential negative impacts of deepfakes, policy-makers should take the different dimensions of the deepfake lifecycle into account. These dimensions are introduced in the next chapter.