<a href="https://www.thenational.ae/world/europe/donald-trump-deepfake-addresses-democracy-conference-in-copenhagen-1.880343">Deepfakes</a>, or synthetic media, are video clips that have been manipulated using artificial intelligence. Using these sophisticated machine-learning algorithms, we can essentially turn human beings into ventriloquist's dummies. With a few photographs of our intended target and an audio file, deepfake algorithms can produce ultra-realistic fake footage of people saying and doing things they never said or did. I recently watched Albert Einstein giving a lecture he never gave, and Grigori Rasputin singing a song he never sang – Beyoncés hit <em>Halo</em>. But what are the psychological and social implications of this emerging technology? Beyond entertainment and the "Wow, it's so real" factor, I also experience a sense of fear and foreboding when watching these clips. The technology is capable of taking propaganda, defamation and misinformation to whole new levels. Many of the current<a href="https://www.thenational.ae/arts-culture/this-is-not-barack-obama-are-we-ready-for-the-wild-world-of-deepfakes-1.883676"> deepfakes</a> out there are easily identifiable as such, or clearly labelled. However, I can imagine a time when they won't be – perhaps that time has already arrived? Deepfaking is only in its infancy. It's easy to envision second and third-generation software that will produce material that is even more believable. Facebook co-founder <a href="https://www.thenational.ae/business/technology/deepfake-video-of-facebook-s-mark-zuckerberg-will-not-be-taken-down-1.873716">Mark Zuckerberg</a> was recently the target of a deepfake, which depicted him as sinister and megalomaniacal. The puppet masters, the team behind this state-of-the-art fake footage, had him declare to the camera: Whoever controls the data, controls the future". Also, in a slightly lower-tech incident earlier this year, a doctored video of Nancy Pelosi, speaker of the United States House of Representatives, went viral on social media. The manipulated footage made her look and sound drunk, highlighting the threat that this technology can pose to people's reputations. Humiliation is a powerful and painful emotion, and deepfakes can be used to embarrass, harass and even blackmail their targets. These videos are becoming easier to create increasingly realistic. The technology won't be limited to targeting celebrities – personal deepfakes are already here. In 1968 Andy Warhol predicted that "In the future, everyone will be world-famous for 15 minutes". It now seems that many of us will be infamous too. I can imagine enraged parents punishing teenagers for infractions they didn't commit, but appear to have, while youngsters attempt to explain deepfake and AI-generated media, to no avail. Malicious defamation, being falsely made to look bad, can have negative consequences for our mental health. Even after the fakeness of the footage has been established, the victims of reputational attacks may be stigmatised and psychologically scarred, indefinitely. The rise of the deepfake is also likely to erode trust in the media. If we can't trust our own eyes and ears, then what can we trust? Philosophers talk about an "epistemological crisis", the idea that we no longer know which sources of knowledge are sound. Deepfakes will only deepen this feeling, undermining the certainty of our own senses, leaving a cloud of doubt over much of the information we consume. 
In my imagined worst-case scenario, society becomes a dystopia of distrust, in which paranoia is the norm and nobody is really sure about anything any more.

Another likely consequence of deepfakes is that they will become a handy defence for people who legitimately get caught saying or doing things they later regret. Claiming that embarrassing or even incriminating footage of you is a deepfake will become a well-worn escape route. Similarly, many of us will dismiss as deepfakes anything that displeases us, while taking a far less critical view of footage that aligns with our current worldviews and preferred narratives.

With the 2020 US election on the horizon, concerns about deepfakes being used to manipulate public opinion and influence electoral outcomes are mounting. Last month, a deepfake bill was introduced in the US Congress. A collaboration between computer scientists, disinformation specialists and human rights advocates, it proposes urgent and decisive action to curb the proliferation of malicious deepfakes.

One of the proposed measures would require that the software used to create deepfakes automatically adds a watermark, alerting viewers to the footage's inauthenticity. Another is for social media platforms to be more proactive in detecting and removing deepfakes. A third proposes punishments, including fines and jail time, for those who create and disseminate malicious deepfakes.

Google CEO Sundar Pichai believes that AI will have a more significant impact on humanity than the discovery of fire. We get light, warmth and cooked food from fire, but fire also kills. Deepfakes are to AI what the flamethrower is to fire – a dangerous and powerfully destructive weapon when placed in the wrong hands.

<em>Justin Thomas is a professor of psychology at Zayed University</em>